00:00:00.001 Started by upstream project "autotest-per-patch" build number 132578
00:00:00.001 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.035 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.036 The recommended git tool is: git
00:00:00.037 using credential 00000000-0000-0000-0000-000000000002
00:00:00.039 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.054 Fetching changes from the remote Git repository
00:00:00.061 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.074 Using shallow fetch with depth 1
00:00:00.074 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.074 > git --version # timeout=10
00:00:00.087 > git --version # 'git version 2.39.2'
00:00:00.087 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.110 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.110 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.324 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.336 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.347 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:02.347 > git config core.sparsecheckout # timeout=10
00:00:02.358 > git read-tree -mu HEAD # timeout=10
00:00:02.372 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:02.392 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:02.392 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:02.611 [Pipeline] Start of Pipeline
00:00:02.625 [Pipeline] library
00:00:02.626 Loading library shm_lib@master
00:00:02.627 Library shm_lib@master is cached. Copying from home.
00:00:02.648 [Pipeline] node
00:00:02.675 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:02.677 [Pipeline] {
00:00:02.688 [Pipeline] catchError
00:00:02.690 [Pipeline] {
00:00:02.708 [Pipeline] wrap
00:00:02.721 [Pipeline] {
00:00:02.730 [Pipeline] stage
00:00:02.732 [Pipeline] { (Prologue)
00:00:02.749 [Pipeline] echo
00:00:02.751 Node: VM-host-WFP7
00:00:02.759 [Pipeline] cleanWs
00:00:02.773 [WS-CLEANUP] Deleting project workspace...
00:00:02.773 [WS-CLEANUP] Deferred wipeout is used...
00:00:02.783 [WS-CLEANUP] done
00:00:03.167 [Pipeline] setCustomBuildProperty
00:00:03.232 [Pipeline] httpRequest
00:00:03.655 [Pipeline] echo
00:00:03.656 Sorcerer 10.211.164.101 is alive
00:00:03.662 [Pipeline] retry
00:00:03.663 [Pipeline] {
00:00:03.671 [Pipeline] httpRequest
00:00:03.674 HttpMethod: GET
00:00:03.675 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.675 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.676 Response Code: HTTP/1.1 200 OK
00:00:03.677 Success: Status code 200 is in the accepted range: 200,404
00:00:03.677 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.847 [Pipeline] }
00:00:03.863 [Pipeline] // retry
00:00:03.870 [Pipeline] sh
00:00:04.155 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.169 [Pipeline] httpRequest
00:00:05.116 [Pipeline] echo
00:00:05.118 Sorcerer 10.211.164.101 is alive
00:00:05.127 [Pipeline] retry
00:00:05.129 [Pipeline] {
00:00:05.144 [Pipeline] httpRequest
00:00:05.149 HttpMethod: GET
00:00:05.150 URL: http://10.211.164.101/packages/spdk_24f0cb4c3f83c5e3773ceac60a95836862784b97.tar.gz
00:00:05.150 Sending request to url: http://10.211.164.101/packages/spdk_24f0cb4c3f83c5e3773ceac60a95836862784b97.tar.gz
00:00:05.151 Response Code: HTTP/1.1 200 OK
00:00:05.151 Success: Status code 200 is in the accepted range: 200,404
00:00:05.152 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_24f0cb4c3f83c5e3773ceac60a95836862784b97.tar.gz
00:00:25.306 [Pipeline] }
00:00:25.325 [Pipeline] // retry
00:00:25.333 [Pipeline] sh
00:00:25.619 + tar --no-same-owner -xf spdk_24f0cb4c3f83c5e3773ceac60a95836862784b97.tar.gz
00:00:28.176 [Pipeline] sh
00:00:28.467 + git -C spdk log --oneline -n5
00:00:28.467 24f0cb4c3 test/common: Make sure get_zoned_devs() picks all namespaces
00:00:28.467 2f2acf4eb doc: move nvmf_tracing.md to tracing.md
00:00:28.467 5592070b3 doc: update nvmf_tracing.md
00:00:28.467 5ca6db5da nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK
00:00:28.467 f7ce15267 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set
00:00:28.489 [Pipeline] writeFile
00:00:28.505 [Pipeline] sh
00:00:28.791 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:28.806 [Pipeline] sh
00:00:29.124 + cat autorun-spdk.conf
00:00:29.124 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:29.124 SPDK_RUN_ASAN=1
00:00:29.124 SPDK_RUN_UBSAN=1
00:00:29.124 SPDK_TEST_RAID=1
00:00:29.124 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:29.132 RUN_NIGHTLY=0
00:00:29.134 [Pipeline] }
00:00:29.148 [Pipeline] // stage
00:00:29.164 [Pipeline] stage
00:00:29.166 [Pipeline] { (Run VM)
00:00:29.179 [Pipeline] sh
00:00:29.463 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:29.463 + echo 'Start stage prepare_nvme.sh'
00:00:29.463 Start stage prepare_nvme.sh
00:00:29.463 + [[ -n 2 ]]
00:00:29.463 + disk_prefix=ex2
00:00:29.463 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:00:29.463 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:00:29.463 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:00:29.463 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:29.463 ++ SPDK_RUN_ASAN=1
00:00:29.463 ++ SPDK_RUN_UBSAN=1
00:00:29.463 ++ SPDK_TEST_RAID=1
00:00:29.463 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:29.463 ++ RUN_NIGHTLY=0
00:00:29.463 + cd /var/jenkins/workspace/raid-vg-autotest
00:00:29.463 + nvme_files=()
00:00:29.463 + declare -A nvme_files
00:00:29.463 + backend_dir=/var/lib/libvirt/images/backends
00:00:29.463 + nvme_files['nvme.img']=5G
00:00:29.463 + nvme_files['nvme-cmb.img']=5G
00:00:29.463 + nvme_files['nvme-multi0.img']=4G
00:00:29.463 + nvme_files['nvme-multi1.img']=4G
00:00:29.463 + nvme_files['nvme-multi2.img']=4G
00:00:29.463 + nvme_files['nvme-openstack.img']=8G
00:00:29.463 + nvme_files['nvme-zns.img']=5G
00:00:29.463 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:29.464 + (( SPDK_TEST_FTL == 1 ))
00:00:29.464 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:29.464 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:29.464 + for nvme in "${!nvme_files[@]}"
00:00:29.464 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:00:29.464 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:29.464 + for nvme in "${!nvme_files[@]}"
00:00:29.464 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:00:29.464 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:29.464 + for nvme in "${!nvme_files[@]}"
00:00:29.464 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:00:29.464 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:29.464 + for nvme in "${!nvme_files[@]}"
00:00:29.464 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:00:29.464 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:29.464 + for nvme in "${!nvme_files[@]}"
00:00:29.464 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:00:29.464 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:29.464 + for nvme in "${!nvme_files[@]}"
00:00:29.464 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:00:29.464 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:29.464 + for nvme in "${!nvme_files[@]}"
00:00:29.464 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:00:29.724 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:29.724 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:00:29.724 + echo 'End stage prepare_nvme.sh'
00:00:29.724 End stage prepare_nvme.sh
00:00:29.739 [Pipeline] sh
00:00:30.029 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:30.029 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39
00:00:30.029 
00:00:30.029 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:00:30.029 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:00:30.029 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:00:30.029 HELP=0
00:00:30.029 DRY_RUN=0
00:00:30.029 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,
00:00:30.029 NVME_DISKS_TYPE=nvme,nvme,
00:00:30.029 NVME_AUTO_CREATE=0
00:00:30.029 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,
00:00:30.029 NVME_CMB=,,
00:00:30.029 NVME_PMR=,,
00:00:30.029 NVME_ZNS=,,
00:00:30.029 NVME_MS=,,
00:00:30.029 NVME_FDP=,,
00:00:30.029 SPDK_VAGRANT_DISTRO=fedora39
00:00:30.029 SPDK_VAGRANT_VMCPU=10
00:00:30.029 SPDK_VAGRANT_VMRAM=12288
00:00:30.029 SPDK_VAGRANT_PROVIDER=libvirt
00:00:30.029 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:30.029 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:30.029 SPDK_OPENSTACK_NETWORK=0
00:00:30.029 VAGRANT_PACKAGE_BOX=0
00:00:30.029 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:30.029 FORCE_DISTRO=true
00:00:30.029 VAGRANT_BOX_VERSION=
00:00:30.029 EXTRA_VAGRANTFILES=
00:00:30.029 NIC_MODEL=virtio
00:00:30.029 
00:00:30.029 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:00:30.029 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:00:31.939 Bringing machine 'default' up with 'libvirt' provider...
00:00:32.510 ==> default: Creating image (snapshot of base box volume).
00:00:32.510 ==> default: Creating domain with the following settings...
00:00:32.510 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732707538_53aedf39fe8c16869cac
00:00:32.510 ==> default: -- Domain type: kvm
00:00:32.510 ==> default: -- Cpus: 10
00:00:32.510 ==> default: -- Feature: acpi
00:00:32.510 ==> default: -- Feature: apic
00:00:32.510 ==> default: -- Feature: pae
00:00:32.510 ==> default: -- Memory: 12288M
00:00:32.510 ==> default: -- Memory Backing: hugepages:
00:00:32.510 ==> default: -- Management MAC:
00:00:32.510 ==> default: -- Loader:
00:00:32.510 ==> default: -- Nvram:
00:00:32.510 ==> default: -- Base box: spdk/fedora39
00:00:32.510 ==> default: -- Storage pool: default
00:00:32.510 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732707538_53aedf39fe8c16869cac.img (20G)
00:00:32.510 ==> default: -- Volume Cache: default
00:00:32.510 ==> default: -- Kernel:
00:00:32.510 ==> default: -- Initrd:
00:00:32.510 ==> default: -- Graphics Type: vnc
00:00:32.510 ==> default: -- Graphics Port: -1
00:00:32.510 ==> default: -- Graphics IP: 127.0.0.1
00:00:32.510 ==> default: -- Graphics Password: Not defined
00:00:32.510 ==> default: -- Video Type: cirrus
00:00:32.510 ==> default: -- Video VRAM: 9216
00:00:32.510 ==> default: -- Sound Type:
00:00:32.510 ==> default: -- Keymap: en-us
00:00:32.510 ==> default: -- TPM Path:
00:00:32.510 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:32.510 ==> default: -- Command line args:
00:00:32.510 ==> default: -> value=-device,
00:00:32.510 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:32.510 ==> default: -> value=-drive,
00:00:32.510 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0,
00:00:32.510 ==> default: -> value=-device,
00:00:32.510 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:32.510 ==> default: -> value=-device,
00:00:32.510 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:32.510 ==> default: -> value=-drive,
00:00:32.510 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:32.510 ==> default: -> value=-device,
00:00:32.510 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:32.510 ==> default: -> value=-drive,
00:00:32.510 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:32.510 ==> default: -> value=-device,
00:00:32.510 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:32.510 ==> default: -> value=-drive,
00:00:32.510 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:32.510 ==> default: -> value=-device,
00:00:32.510 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:32.510 ==> default: Creating shared folders metadata...
00:00:32.510 ==> default: Starting domain.
00:00:34.417 ==> default: Waiting for domain to get an IP address...
00:00:52.518 ==> default: Waiting for SSH to become available...
00:00:52.518 ==> default: Configuring and enabling network interfaces...
00:00:57.810 default: SSH address: 192.168.121.214:22
00:00:57.810 default: SSH username: vagrant
00:00:57.810 default: SSH auth method: private key
00:00:59.718 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:07.865 ==> default: Mounting SSHFS shared folder...
00:01:10.402 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:10.402 ==> default: Checking Mount..
00:01:11.778 ==> default: Folder Successfully Mounted!
00:01:11.778 ==> default: Running provisioner: file...
00:01:12.718 default: ~/.gitconfig => .gitconfig
00:01:13.294 
00:01:13.294 SUCCESS!
00:01:13.294 
00:01:13.294 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:13.294 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:13.294 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:13.294 
00:01:13.303 [Pipeline] }
00:01:13.320 [Pipeline] // stage
00:01:13.330 [Pipeline] dir
00:01:13.331 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:01:13.333 [Pipeline] {
00:01:13.346 [Pipeline] catchError
00:01:13.347 [Pipeline] {
00:01:13.360 [Pipeline] sh
00:01:13.645 + vagrant ssh-config --host vagrant
00:01:13.645 + sed -ne /^Host/,$p
00:01:13.645 + tee ssh_conf
00:01:16.198 Host vagrant
00:01:16.198 HostName 192.168.121.214
00:01:16.198 User vagrant
00:01:16.198 Port 22
00:01:16.198 UserKnownHostsFile /dev/null
00:01:16.198 StrictHostKeyChecking no
00:01:16.198 PasswordAuthentication no
00:01:16.198 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:16.198 IdentitiesOnly yes
00:01:16.198 LogLevel FATAL
00:01:16.198 ForwardAgent yes
00:01:16.198 ForwardX11 yes
00:01:16.198 
00:01:16.212 [Pipeline] withEnv
00:01:16.214 [Pipeline] {
00:01:16.226 [Pipeline] sh
00:01:16.507 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:16.507 source /etc/os-release
00:01:16.507 [[ -e /image.version ]] && img=$(< /image.version)
00:01:16.507 # Minimal, systemd-like check.
00:01:16.507 if [[ -e /.dockerenv ]]; then
00:01:16.508 # Clear garbage from the node's name:
00:01:16.508 # agt-er_autotest_547-896 -> autotest_547-896
00:01:16.508 # $HOSTNAME is the actual container id
00:01:16.508 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:16.508 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:16.508 # We can assume this is a mount from a host where container is running,
00:01:16.508 # so fetch its hostname to easily identify the target swarm worker.
00:01:16.508 container="$(< /etc/hostname) ($agent)"
00:01:16.508 else
00:01:16.508 # Fallback
00:01:16.508 container=$agent
00:01:16.508 fi
00:01:16.508 fi
00:01:16.508 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:16.508 
00:01:16.781 [Pipeline] }
00:01:16.798 [Pipeline] // withEnv
00:01:16.807 [Pipeline] setCustomBuildProperty
00:01:16.823 [Pipeline] stage
00:01:16.826 [Pipeline] { (Tests)
00:01:16.843 [Pipeline] sh
00:01:17.126 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:17.401 [Pipeline] sh
00:01:17.684 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:17.958 [Pipeline] timeout
00:01:17.958 Timeout set to expire in 1 hr 30 min
00:01:17.960 [Pipeline] {
00:01:17.975 [Pipeline] sh
00:01:18.261 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:18.832 HEAD is now at 24f0cb4c3 test/common: Make sure get_zoned_devs() picks all namespaces
00:01:18.844 [Pipeline] sh
00:01:19.127 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:19.402 [Pipeline] sh
00:01:19.684 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:19.960 [Pipeline] sh
00:01:20.242 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:01:20.502 ++ readlink -f spdk_repo
00:01:20.502 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:20.502 + [[ -n /home/vagrant/spdk_repo ]]
00:01:20.502 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:20.502 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:20.502 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:20.502 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:20.502 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:20.502 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:20.502 + cd /home/vagrant/spdk_repo
00:01:20.502 + source /etc/os-release
00:01:20.502 ++ NAME='Fedora Linux'
00:01:20.502 ++ VERSION='39 (Cloud Edition)'
00:01:20.502 ++ ID=fedora
00:01:20.502 ++ VERSION_ID=39
00:01:20.502 ++ VERSION_CODENAME=
00:01:20.502 ++ PLATFORM_ID=platform:f39
00:01:20.502 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:20.502 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:20.502 ++ LOGO=fedora-logo-icon
00:01:20.502 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:20.502 ++ HOME_URL=https://fedoraproject.org/
00:01:20.502 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:20.502 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:20.502 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:20.502 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:20.502 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:20.502 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:20.502 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:20.502 ++ SUPPORT_END=2024-11-12
00:01:20.502 ++ VARIANT='Cloud Edition'
00:01:20.502 ++ VARIANT_ID=cloud
00:01:20.502 + uname -a
00:01:20.502 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:20.502 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:21.087 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:21.087 Hugepages
00:01:21.087 node hugesize free / total
00:01:21.087 node0 1048576kB 0 / 0
00:01:21.087 node0 2048kB 0 / 0
00:01:21.087 
00:01:21.087 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:21.087 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:21.087 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:21.087 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:21.087 + rm -f /tmp/spdk-ld-path
00:01:21.087 + source autorun-spdk.conf
00:01:21.087 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:21.087 ++ SPDK_RUN_ASAN=1
00:01:21.087 ++ SPDK_RUN_UBSAN=1
00:01:21.087 ++ SPDK_TEST_RAID=1
00:01:21.087 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:21.087 ++ RUN_NIGHTLY=0
00:01:21.087 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:21.087 + [[ -n '' ]]
00:01:21.087 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:21.087 + for M in /var/spdk/build-*-manifest.txt
00:01:21.087 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:21.087 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:21.087 + for M in /var/spdk/build-*-manifest.txt
00:01:21.087 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:21.087 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:21.087 + for M in /var/spdk/build-*-manifest.txt
00:01:21.087 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:21.087 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:21.087 ++ uname
00:01:21.087 + [[ Linux == \L\i\n\u\x ]]
00:01:21.087 + sudo dmesg -T
00:01:21.347 + sudo dmesg --clear
00:01:21.347 + dmesg_pid=5436
00:01:21.347 + [[ Fedora Linux == FreeBSD ]]
00:01:21.347 + sudo dmesg -Tw
00:01:21.347 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:21.347 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:21.347 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:21.347 + [[ -x /usr/src/fio-static/fio ]]
00:01:21.347 + export FIO_BIN=/usr/src/fio-static/fio
00:01:21.347 + FIO_BIN=/usr/src/fio-static/fio
00:01:21.347 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:21.347 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:21.347 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:21.347 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:21.347 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:21.347 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:21.347 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:21.347 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:21.347 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:21.347 11:39:47 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:21.347 11:39:47 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:21.347 11:39:47 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:21.348 11:39:47 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:01:21.348 11:39:47 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:01:21.348 11:39:47 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:01:21.348 11:39:47 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:21.348 11:39:47 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:01:21.348 11:39:47 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:21.348 11:39:47 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:21.608 11:39:47 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:21.608 11:39:47 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:21.608 11:39:47 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:21.608 11:39:47 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:21.608 11:39:47 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:21.608 11:39:47 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:21.608 11:39:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:21.608 11:39:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:21.608 11:39:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:21.608 11:39:47 -- paths/export.sh@5 -- $ export PATH
00:01:21.608 11:39:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:21.608 11:39:47 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:21.608 11:39:47 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:21.608 11:39:47 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732707587.XXXXXX
00:01:21.608 11:39:47 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732707587.PWUaB1
00:01:21.608 11:39:47 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:21.608 11:39:47 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:21.608 11:39:47 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:21.608 11:39:47 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:21.608 11:39:47 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:21.608 11:39:47 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:21.608 11:39:47 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:21.608 11:39:47 -- common/autotest_common.sh@10 -- $ set +x
00:01:21.608 11:39:47 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:01:21.608 11:39:47 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:21.608 11:39:47 -- pm/common@17 -- $ local monitor
00:01:21.608 11:39:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:21.608 11:39:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:21.608 11:39:47 -- pm/common@25 -- $ sleep 1
00:01:21.608 11:39:47 -- pm/common@21 -- $ date +%s
00:01:21.608 11:39:47 -- pm/common@21 -- $ date +%s
00:01:21.608 11:39:47 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732707587
00:01:21.608 11:39:47 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732707587
00:01:21.608 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732707587_collect-vmstat.pm.log
00:01:21.608 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732707587_collect-cpu-load.pm.log
00:01:22.547 11:39:48 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:22.547 11:39:48 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:22.547 11:39:48 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:22.547 11:39:48 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:22.547 11:39:48 -- spdk/autobuild.sh@16 -- $ date -u
00:01:22.547 Wed Nov 27 11:39:48 AM UTC 2024
00:01:22.547 11:39:48 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:22.547 v25.01-pre-272-g24f0cb4c3
00:01:22.547 11:39:48 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:22.547 11:39:48 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:22.547 11:39:48 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:22.547 11:39:48 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:22.547 11:39:48 -- common/autotest_common.sh@10 -- $ set +x
00:01:22.547 ************************************
00:01:22.547 START TEST asan
00:01:22.547 ************************************
00:01:22.547 using asan
00:01:22.547 11:39:48 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:22.547 
00:01:22.547 real 0m0.001s
00:01:22.547 user 0m0.001s
00:01:22.547 sys 0m0.000s
00:01:22.547 11:39:48 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:22.547 11:39:48 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:22.547 ************************************
00:01:22.547 END TEST asan
00:01:22.547 ************************************
00:01:22.547 11:39:48 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:22.547 11:39:48 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:22.547 11:39:48 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:22.547 11:39:48 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:22.547 11:39:48 -- common/autotest_common.sh@10 -- $ set +x
00:01:22.547 ************************************
00:01:22.547 START TEST ubsan
00:01:22.547 ************************************
00:01:22.547 using ubsan
00:01:22.547 11:39:48 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:22.547 
00:01:22.547 real 0m0.000s
00:01:22.547 user 0m0.000s
00:01:22.547 sys 0m0.000s
00:01:22.547 11:39:48 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:22.547 11:39:48 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:22.547 ************************************
00:01:22.547 END TEST ubsan
00:01:22.547 ************************************
00:01:22.806 11:39:48 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:22.806 11:39:48 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:22.806 11:39:48 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:22.806 11:39:48 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:22.806 11:39:48 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:22.806 11:39:48 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:22.806 11:39:48 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:22.806 11:39:48 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:22.806 11:39:48 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:01:22.806 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:22.806 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:23.374 Using 'verbs' RDMA provider
00:01:39.198 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:01:54.167 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:01:54.740 Creating mk/config.mk...done.
00:01:54.740 Creating mk/cc.flags.mk...done.
00:01:54.740 Type 'make' to build.
00:01:54.740 11:40:21 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:01:54.740 11:40:21 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:54.740 11:40:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:54.740 11:40:21 -- common/autotest_common.sh@10 -- $ set +x
00:01:54.740 ************************************
00:01:54.740 START TEST make
00:01:54.740 ************************************
00:01:54.740 11:40:21 make -- common/autotest_common.sh@1129 -- $ make -j10
00:01:55.311 make[1]: Nothing to be done for 'all'.
00:02:05.306 The Meson build system
00:02:05.306 Version: 1.5.0
00:02:05.306 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:05.306 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:05.306 Build type: native build
00:02:05.306 Program cat found: YES (/usr/bin/cat)
00:02:05.306 Project name: DPDK
00:02:05.306 Project version: 24.03.0
00:02:05.306 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:05.306 C linker for the host machine: cc ld.bfd 2.40-14
00:02:05.306 Host machine cpu family: x86_64
00:02:05.306 Host machine cpu: x86_64
00:02:05.306 Message: ## Building in Developer Mode ##
00:02:05.306 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:05.306 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:05.306 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:05.306 Program python3 found: YES (/usr/bin/python3)
00:02:05.306 Program cat found: YES (/usr/bin/cat)
00:02:05.306 Compiler for C supports arguments -march=native: YES
00:02:05.306 Checking for size of "void *" : 8
00:02:05.306 Checking for size of "void *" : 8 (cached)
00:02:05.306 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:05.306 Library m found: YES
00:02:05.306 Library numa found: YES
00:02:05.306 Has header "numaif.h" : YES
00:02:05.306 Library fdt found: NO
00:02:05.306 Library execinfo found: NO
00:02:05.306 Has header "execinfo.h" : YES
00:02:05.306 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:05.306 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:05.306 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:05.306 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:05.306 Run-time dependency openssl found: YES 3.1.1
00:02:05.306 Run-time dependency libpcap found: YES 1.10.4
00:02:05.306 Has header "pcap.h" with dependency
libpcap: YES 00:02:05.306 Compiler for C supports arguments -Wcast-qual: YES 00:02:05.306 Compiler for C supports arguments -Wdeprecated: YES 00:02:05.306 Compiler for C supports arguments -Wformat: YES 00:02:05.306 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:05.306 Compiler for C supports arguments -Wformat-security: NO 00:02:05.306 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:05.306 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:05.306 Compiler for C supports arguments -Wnested-externs: YES 00:02:05.306 Compiler for C supports arguments -Wold-style-definition: YES 00:02:05.306 Compiler for C supports arguments -Wpointer-arith: YES 00:02:05.306 Compiler for C supports arguments -Wsign-compare: YES 00:02:05.306 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:05.306 Compiler for C supports arguments -Wundef: YES 00:02:05.306 Compiler for C supports arguments -Wwrite-strings: YES 00:02:05.306 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:05.306 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:05.306 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:05.306 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:05.306 Program objdump found: YES (/usr/bin/objdump) 00:02:05.306 Compiler for C supports arguments -mavx512f: YES 00:02:05.306 Checking if "AVX512 checking" compiles: YES 00:02:05.306 Fetching value of define "__SSE4_2__" : 1 00:02:05.306 Fetching value of define "__AES__" : 1 00:02:05.306 Fetching value of define "__AVX__" : 1 00:02:05.306 Fetching value of define "__AVX2__" : 1 00:02:05.306 Fetching value of define "__AVX512BW__" : 1 00:02:05.306 Fetching value of define "__AVX512CD__" : 1 00:02:05.306 Fetching value of define "__AVX512DQ__" : 1 00:02:05.306 Fetching value of define "__AVX512F__" : 1 00:02:05.306 Fetching value of define "__AVX512VL__" : 1 00:02:05.306 Fetching value of define 
"__PCLMUL__" : 1 00:02:05.306 Fetching value of define "__RDRND__" : 1 00:02:05.306 Fetching value of define "__RDSEED__" : 1 00:02:05.306 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:05.306 Fetching value of define "__znver1__" : (undefined) 00:02:05.306 Fetching value of define "__znver2__" : (undefined) 00:02:05.306 Fetching value of define "__znver3__" : (undefined) 00:02:05.306 Fetching value of define "__znver4__" : (undefined) 00:02:05.306 Library asan found: YES 00:02:05.306 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:05.306 Message: lib/log: Defining dependency "log" 00:02:05.306 Message: lib/kvargs: Defining dependency "kvargs" 00:02:05.306 Message: lib/telemetry: Defining dependency "telemetry" 00:02:05.306 Library rt found: YES 00:02:05.306 Checking for function "getentropy" : NO 00:02:05.306 Message: lib/eal: Defining dependency "eal" 00:02:05.306 Message: lib/ring: Defining dependency "ring" 00:02:05.306 Message: lib/rcu: Defining dependency "rcu" 00:02:05.306 Message: lib/mempool: Defining dependency "mempool" 00:02:05.306 Message: lib/mbuf: Defining dependency "mbuf" 00:02:05.306 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:05.306 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:05.306 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:05.306 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:05.306 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:05.306 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:05.306 Compiler for C supports arguments -mpclmul: YES 00:02:05.306 Compiler for C supports arguments -maes: YES 00:02:05.306 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:05.307 Compiler for C supports arguments -mavx512bw: YES 00:02:05.307 Compiler for C supports arguments -mavx512dq: YES 00:02:05.307 Compiler for C supports arguments -mavx512vl: YES 00:02:05.307 Compiler for C supports arguments -mvpclmulqdq: YES 
00:02:05.307 Compiler for C supports arguments -mavx2: YES 00:02:05.307 Compiler for C supports arguments -mavx: YES 00:02:05.307 Message: lib/net: Defining dependency "net" 00:02:05.307 Message: lib/meter: Defining dependency "meter" 00:02:05.307 Message: lib/ethdev: Defining dependency "ethdev" 00:02:05.307 Message: lib/pci: Defining dependency "pci" 00:02:05.307 Message: lib/cmdline: Defining dependency "cmdline" 00:02:05.307 Message: lib/hash: Defining dependency "hash" 00:02:05.307 Message: lib/timer: Defining dependency "timer" 00:02:05.307 Message: lib/compressdev: Defining dependency "compressdev" 00:02:05.307 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:05.307 Message: lib/dmadev: Defining dependency "dmadev" 00:02:05.307 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:05.307 Message: lib/power: Defining dependency "power" 00:02:05.307 Message: lib/reorder: Defining dependency "reorder" 00:02:05.307 Message: lib/security: Defining dependency "security" 00:02:05.307 Has header "linux/userfaultfd.h" : YES 00:02:05.307 Has header "linux/vduse.h" : YES 00:02:05.307 Message: lib/vhost: Defining dependency "vhost" 00:02:05.307 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:05.307 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:05.307 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:05.307 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:05.307 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:05.307 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:05.307 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:05.307 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:05.307 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:05.307 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:05.307 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:05.307 Configuring doxy-api-html.conf using configuration 00:02:05.307 Configuring doxy-api-man.conf using configuration 00:02:05.307 Program mandb found: YES (/usr/bin/mandb) 00:02:05.307 Program sphinx-build found: NO 00:02:05.307 Configuring rte_build_config.h using configuration 00:02:05.307 Message: 00:02:05.307 ================= 00:02:05.307 Applications Enabled 00:02:05.307 ================= 00:02:05.307 00:02:05.307 apps: 00:02:05.307 00:02:05.307 00:02:05.307 Message: 00:02:05.307 ================= 00:02:05.307 Libraries Enabled 00:02:05.307 ================= 00:02:05.307 00:02:05.307 libs: 00:02:05.307 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:05.307 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:05.307 cryptodev, dmadev, power, reorder, security, vhost, 00:02:05.307 00:02:05.307 Message: 00:02:05.307 =============== 00:02:05.307 Drivers Enabled 00:02:05.307 =============== 00:02:05.307 00:02:05.307 common: 00:02:05.307 00:02:05.307 bus: 00:02:05.307 pci, vdev, 00:02:05.307 mempool: 00:02:05.307 ring, 00:02:05.307 dma: 00:02:05.307 00:02:05.307 net: 00:02:05.307 00:02:05.307 crypto: 00:02:05.307 00:02:05.307 compress: 00:02:05.307 00:02:05.307 vdpa: 00:02:05.307 00:02:05.307 00:02:05.307 Message: 00:02:05.307 ================= 00:02:05.307 Content Skipped 00:02:05.307 ================= 00:02:05.307 00:02:05.307 apps: 00:02:05.307 dumpcap: explicitly disabled via build config 00:02:05.307 graph: explicitly disabled via build config 00:02:05.307 pdump: explicitly disabled via build config 00:02:05.307 proc-info: explicitly disabled via build config 00:02:05.307 test-acl: explicitly disabled via build config 00:02:05.307 test-bbdev: explicitly disabled via build config 00:02:05.307 test-cmdline: explicitly disabled via build config 00:02:05.307 test-compress-perf: explicitly disabled via build config 00:02:05.307 test-crypto-perf: explicitly disabled via build 
config 00:02:05.307 test-dma-perf: explicitly disabled via build config 00:02:05.307 test-eventdev: explicitly disabled via build config 00:02:05.307 test-fib: explicitly disabled via build config 00:02:05.307 test-flow-perf: explicitly disabled via build config 00:02:05.307 test-gpudev: explicitly disabled via build config 00:02:05.307 test-mldev: explicitly disabled via build config 00:02:05.307 test-pipeline: explicitly disabled via build config 00:02:05.307 test-pmd: explicitly disabled via build config 00:02:05.307 test-regex: explicitly disabled via build config 00:02:05.307 test-sad: explicitly disabled via build config 00:02:05.307 test-security-perf: explicitly disabled via build config 00:02:05.307 00:02:05.307 libs: 00:02:05.307 argparse: explicitly disabled via build config 00:02:05.307 metrics: explicitly disabled via build config 00:02:05.307 acl: explicitly disabled via build config 00:02:05.307 bbdev: explicitly disabled via build config 00:02:05.307 bitratestats: explicitly disabled via build config 00:02:05.307 bpf: explicitly disabled via build config 00:02:05.307 cfgfile: explicitly disabled via build config 00:02:05.307 distributor: explicitly disabled via build config 00:02:05.307 efd: explicitly disabled via build config 00:02:05.307 eventdev: explicitly disabled via build config 00:02:05.307 dispatcher: explicitly disabled via build config 00:02:05.307 gpudev: explicitly disabled via build config 00:02:05.307 gro: explicitly disabled via build config 00:02:05.307 gso: explicitly disabled via build config 00:02:05.307 ip_frag: explicitly disabled via build config 00:02:05.307 jobstats: explicitly disabled via build config 00:02:05.307 latencystats: explicitly disabled via build config 00:02:05.307 lpm: explicitly disabled via build config 00:02:05.307 member: explicitly disabled via build config 00:02:05.307 pcapng: explicitly disabled via build config 00:02:05.307 rawdev: explicitly disabled via build config 00:02:05.307 regexdev: explicitly 
disabled via build config 00:02:05.307 mldev: explicitly disabled via build config 00:02:05.307 rib: explicitly disabled via build config 00:02:05.307 sched: explicitly disabled via build config 00:02:05.307 stack: explicitly disabled via build config 00:02:05.307 ipsec: explicitly disabled via build config 00:02:05.307 pdcp: explicitly disabled via build config 00:02:05.307 fib: explicitly disabled via build config 00:02:05.307 port: explicitly disabled via build config 00:02:05.307 pdump: explicitly disabled via build config 00:02:05.307 table: explicitly disabled via build config 00:02:05.307 pipeline: explicitly disabled via build config 00:02:05.307 graph: explicitly disabled via build config 00:02:05.307 node: explicitly disabled via build config 00:02:05.307 00:02:05.307 drivers: 00:02:05.307 common/cpt: not in enabled drivers build config 00:02:05.307 common/dpaax: not in enabled drivers build config 00:02:05.307 common/iavf: not in enabled drivers build config 00:02:05.307 common/idpf: not in enabled drivers build config 00:02:05.307 common/ionic: not in enabled drivers build config 00:02:05.307 common/mvep: not in enabled drivers build config 00:02:05.307 common/octeontx: not in enabled drivers build config 00:02:05.307 bus/auxiliary: not in enabled drivers build config 00:02:05.307 bus/cdx: not in enabled drivers build config 00:02:05.307 bus/dpaa: not in enabled drivers build config 00:02:05.307 bus/fslmc: not in enabled drivers build config 00:02:05.307 bus/ifpga: not in enabled drivers build config 00:02:05.307 bus/platform: not in enabled drivers build config 00:02:05.307 bus/uacce: not in enabled drivers build config 00:02:05.307 bus/vmbus: not in enabled drivers build config 00:02:05.307 common/cnxk: not in enabled drivers build config 00:02:05.307 common/mlx5: not in enabled drivers build config 00:02:05.307 common/nfp: not in enabled drivers build config 00:02:05.307 common/nitrox: not in enabled drivers build config 00:02:05.307 common/qat: not 
in enabled drivers build config 00:02:05.307 common/sfc_efx: not in enabled drivers build config 00:02:05.307 mempool/bucket: not in enabled drivers build config 00:02:05.307 mempool/cnxk: not in enabled drivers build config 00:02:05.307 mempool/dpaa: not in enabled drivers build config 00:02:05.307 mempool/dpaa2: not in enabled drivers build config 00:02:05.308 mempool/octeontx: not in enabled drivers build config 00:02:05.308 mempool/stack: not in enabled drivers build config 00:02:05.308 dma/cnxk: not in enabled drivers build config 00:02:05.308 dma/dpaa: not in enabled drivers build config 00:02:05.308 dma/dpaa2: not in enabled drivers build config 00:02:05.308 dma/hisilicon: not in enabled drivers build config 00:02:05.308 dma/idxd: not in enabled drivers build config 00:02:05.308 dma/ioat: not in enabled drivers build config 00:02:05.308 dma/skeleton: not in enabled drivers build config 00:02:05.308 net/af_packet: not in enabled drivers build config 00:02:05.308 net/af_xdp: not in enabled drivers build config 00:02:05.308 net/ark: not in enabled drivers build config 00:02:05.308 net/atlantic: not in enabled drivers build config 00:02:05.308 net/avp: not in enabled drivers build config 00:02:05.308 net/axgbe: not in enabled drivers build config 00:02:05.308 net/bnx2x: not in enabled drivers build config 00:02:05.308 net/bnxt: not in enabled drivers build config 00:02:05.308 net/bonding: not in enabled drivers build config 00:02:05.308 net/cnxk: not in enabled drivers build config 00:02:05.308 net/cpfl: not in enabled drivers build config 00:02:05.308 net/cxgbe: not in enabled drivers build config 00:02:05.308 net/dpaa: not in enabled drivers build config 00:02:05.308 net/dpaa2: not in enabled drivers build config 00:02:05.308 net/e1000: not in enabled drivers build config 00:02:05.308 net/ena: not in enabled drivers build config 00:02:05.308 net/enetc: not in enabled drivers build config 00:02:05.308 net/enetfec: not in enabled drivers build config 
00:02:05.308 net/enic: not in enabled drivers build config 00:02:05.308 net/failsafe: not in enabled drivers build config 00:02:05.308 net/fm10k: not in enabled drivers build config 00:02:05.308 net/gve: not in enabled drivers build config 00:02:05.308 net/hinic: not in enabled drivers build config 00:02:05.308 net/hns3: not in enabled drivers build config 00:02:05.308 net/i40e: not in enabled drivers build config 00:02:05.308 net/iavf: not in enabled drivers build config 00:02:05.308 net/ice: not in enabled drivers build config 00:02:05.308 net/idpf: not in enabled drivers build config 00:02:05.308 net/igc: not in enabled drivers build config 00:02:05.308 net/ionic: not in enabled drivers build config 00:02:05.308 net/ipn3ke: not in enabled drivers build config 00:02:05.308 net/ixgbe: not in enabled drivers build config 00:02:05.308 net/mana: not in enabled drivers build config 00:02:05.308 net/memif: not in enabled drivers build config 00:02:05.308 net/mlx4: not in enabled drivers build config 00:02:05.308 net/mlx5: not in enabled drivers build config 00:02:05.308 net/mvneta: not in enabled drivers build config 00:02:05.308 net/mvpp2: not in enabled drivers build config 00:02:05.308 net/netvsc: not in enabled drivers build config 00:02:05.308 net/nfb: not in enabled drivers build config 00:02:05.308 net/nfp: not in enabled drivers build config 00:02:05.308 net/ngbe: not in enabled drivers build config 00:02:05.308 net/null: not in enabled drivers build config 00:02:05.308 net/octeontx: not in enabled drivers build config 00:02:05.308 net/octeon_ep: not in enabled drivers build config 00:02:05.308 net/pcap: not in enabled drivers build config 00:02:05.308 net/pfe: not in enabled drivers build config 00:02:05.308 net/qede: not in enabled drivers build config 00:02:05.308 net/ring: not in enabled drivers build config 00:02:05.308 net/sfc: not in enabled drivers build config 00:02:05.308 net/softnic: not in enabled drivers build config 00:02:05.308 net/tap: not in 
enabled drivers build config 00:02:05.308 net/thunderx: not in enabled drivers build config 00:02:05.308 net/txgbe: not in enabled drivers build config 00:02:05.308 net/vdev_netvsc: not in enabled drivers build config 00:02:05.308 net/vhost: not in enabled drivers build config 00:02:05.308 net/virtio: not in enabled drivers build config 00:02:05.308 net/vmxnet3: not in enabled drivers build config 00:02:05.308 raw/*: missing internal dependency, "rawdev" 00:02:05.308 crypto/armv8: not in enabled drivers build config 00:02:05.308 crypto/bcmfs: not in enabled drivers build config 00:02:05.308 crypto/caam_jr: not in enabled drivers build config 00:02:05.308 crypto/ccp: not in enabled drivers build config 00:02:05.308 crypto/cnxk: not in enabled drivers build config 00:02:05.308 crypto/dpaa_sec: not in enabled drivers build config 00:02:05.308 crypto/dpaa2_sec: not in enabled drivers build config 00:02:05.308 crypto/ipsec_mb: not in enabled drivers build config 00:02:05.308 crypto/mlx5: not in enabled drivers build config 00:02:05.308 crypto/mvsam: not in enabled drivers build config 00:02:05.308 crypto/nitrox: not in enabled drivers build config 00:02:05.308 crypto/null: not in enabled drivers build config 00:02:05.308 crypto/octeontx: not in enabled drivers build config 00:02:05.308 crypto/openssl: not in enabled drivers build config 00:02:05.308 crypto/scheduler: not in enabled drivers build config 00:02:05.308 crypto/uadk: not in enabled drivers build config 00:02:05.308 crypto/virtio: not in enabled drivers build config 00:02:05.308 compress/isal: not in enabled drivers build config 00:02:05.308 compress/mlx5: not in enabled drivers build config 00:02:05.308 compress/nitrox: not in enabled drivers build config 00:02:05.308 compress/octeontx: not in enabled drivers build config 00:02:05.308 compress/zlib: not in enabled drivers build config 00:02:05.308 regex/*: missing internal dependency, "regexdev" 00:02:05.308 ml/*: missing internal dependency, "mldev" 
00:02:05.308 vdpa/ifc: not in enabled drivers build config 00:02:05.308 vdpa/mlx5: not in enabled drivers build config 00:02:05.308 vdpa/nfp: not in enabled drivers build config 00:02:05.308 vdpa/sfc: not in enabled drivers build config 00:02:05.308 event/*: missing internal dependency, "eventdev" 00:02:05.308 baseband/*: missing internal dependency, "bbdev" 00:02:05.308 gpu/*: missing internal dependency, "gpudev" 00:02:05.308 00:02:05.308 00:02:05.308 Build targets in project: 85 00:02:05.308 00:02:05.308 DPDK 24.03.0 00:02:05.308 00:02:05.308 User defined options 00:02:05.308 buildtype : debug 00:02:05.308 default_library : shared 00:02:05.308 libdir : lib 00:02:05.308 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:05.308 b_sanitize : address 00:02:05.308 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:05.308 c_link_args : 00:02:05.308 cpu_instruction_set: native 00:02:05.308 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:05.308 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:05.308 enable_docs : false 00:02:05.308 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:05.308 enable_kmods : false 00:02:05.308 max_lcores : 128 00:02:05.308 tests : false 00:02:05.308 00:02:05.308 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:05.308 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:05.308 [1/268] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:02:05.308 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:05.308 [3/268] Linking static target lib/librte_kvargs.a 00:02:05.308 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:05.308 [5/268] Linking static target lib/librte_log.a 00:02:05.308 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:05.568 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:05.568 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:05.568 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:05.568 [10/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.568 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:05.829 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:05.829 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:05.829 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:05.829 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:05.829 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:05.829 [17/268] Linking static target lib/librte_telemetry.a 00:02:05.829 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:06.089 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.089 [20/268] Linking target lib/librte_log.so.24.1 00:02:06.348 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:06.348 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:06.348 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:06.349 [24/268] Generating symbol 
file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:06.349 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:06.349 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:06.349 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:06.349 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:06.349 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:06.349 [30/268] Linking target lib/librte_kvargs.so.24.1 00:02:06.349 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:06.608 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:06.608 [33/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:06.608 [34/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.608 [35/268] Linking target lib/librte_telemetry.so.24.1 00:02:06.868 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:06.868 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:06.868 [38/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:06.868 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:06.868 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:06.868 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:06.868 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:06.868 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:06.868 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:06.868 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 
00:02:07.128 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:07.128 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:07.128 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:07.128 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:07.388 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:07.388 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:07.388 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:07.388 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:07.647 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:07.647 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:07.647 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:07.647 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:07.647 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:07.647 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:07.907 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:07.907 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:07.907 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:07.907 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:07.907 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:08.167 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:08.167 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:08.167 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:08.428 [68/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:08.428 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:08.428 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:08.428 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:08.428 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:08.428 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:08.428 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:08.428 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:08.687 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:08.687 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:08.687 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:08.687 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:08.687 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:08.687 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:08.947 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:08.947 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:09.206 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:09.206 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:09.206 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:09.206 [87/268] Linking static target lib/librte_ring.a 00:02:09.206 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:09.206 [89/268] Linking static target lib/librte_eal.a 00:02:09.206 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:09.465 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:09.465 
[92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:09.465 [93/268] Linking static target lib/librte_mempool.a 00:02:09.725 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:09.725 [95/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:09.725 [96/268] Linking static target lib/librte_rcu.a 00:02:09.725 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.984 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:09.984 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:09.984 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:09.984 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:09.984 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.984 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:10.244 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:10.244 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:10.244 [106/268] Linking static target lib/librte_mbuf.a 00:02:10.244 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:10.244 [108/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:10.244 [109/268] Linking static target lib/librte_net.a 00:02:10.503 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:10.503 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:10.503 [112/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:10.503 [113/268] Linking static target lib/librte_meter.a 00:02:10.503 [114/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.762 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:10.762 
[116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.762 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:10.762 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.020 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:11.020 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:11.279 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:11.279 [122/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.538 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:11.538 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:11.538 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:11.538 [126/268] Linking static target lib/librte_pci.a 00:02:11.538 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:11.538 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:11.797 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:11.797 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:11.797 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:11.797 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:11.797 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:11.797 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:12.056 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:12.056 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.056 [137/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:12.056 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:12.056 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:12.056 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:12.056 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:12.056 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:12.056 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:12.056 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:12.056 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:12.056 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:12.056 [147/268] Linking static target lib/librte_cmdline.a 00:02:12.316 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:12.316 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:12.576 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:12.576 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:12.576 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:12.576 [153/268] Linking static target lib/librte_timer.a 00:02:12.576 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:12.576 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:12.834 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:12.834 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:12.834 [158/268] Linking static target lib/librte_ethdev.a 00:02:12.834 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:12.834 
[160/268] Linking static target lib/librte_compressdev.a 00:02:12.834 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:13.092 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:13.092 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:13.092 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.092 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:13.092 [166/268] Linking static target lib/librte_dmadev.a 00:02:13.350 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:13.350 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:13.350 [169/268] Linking static target lib/librte_hash.a 00:02:13.608 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:13.608 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:13.608 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:13.608 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.867 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:13.867 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.867 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:14.126 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:14.126 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:14.126 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:14.126 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.126 [181/268] Linking static target lib/librte_cryptodev.a 00:02:14.126 
[182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:14.126 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:14.384 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:14.384 [185/268] Linking static target lib/librte_power.a 00:02:14.643 [186/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.643 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:14.643 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:14.643 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:14.643 [190/268] Linking static target lib/librte_reorder.a 00:02:14.643 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:14.643 [192/268] Linking static target lib/librte_security.a 00:02:14.643 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:15.210 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.470 [195/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.470 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:15.470 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:15.470 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.729 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:15.729 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:15.988 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:15.989 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:15.989 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:15.989 [204/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:16.249 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:16.249 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:16.249 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.508 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:16.508 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:16.509 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:16.509 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:16.768 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:16.768 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:16.768 [214/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:16.768 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:16.769 [216/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:16.769 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:16.769 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:16.769 [219/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:16.769 [220/268] Linking static target drivers/librte_bus_vdev.a 00:02:16.769 [221/268] Linking static target drivers/librte_bus_pci.a 00:02:17.028 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:17.028 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:17.028 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:17.028 
[225/268] Linking static target drivers/librte_mempool_ring.a 00:02:17.028 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.289 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.548 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:20.092 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.092 [230/268] Linking target lib/librte_eal.so.24.1 00:02:20.092 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:20.092 [232/268] Linking target lib/librte_meter.so.24.1 00:02:20.092 [233/268] Linking target lib/librte_ring.so.24.1 00:02:20.092 [234/268] Linking target lib/librte_dmadev.so.24.1 00:02:20.352 [235/268] Linking target lib/librte_pci.so.24.1 00:02:20.352 [236/268] Linking target lib/librte_timer.so.24.1 00:02:20.352 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:20.352 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:20.352 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:20.352 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:20.352 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:20.352 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:20.352 [243/268] Linking target lib/librte_mempool.so.24.1 00:02:20.352 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:20.352 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:20.612 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:20.612 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:20.612 [248/268] Linking target 
drivers/librte_mempool_ring.so.24.1 00:02:20.612 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:20.612 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:20.872 [251/268] Linking target lib/librte_reorder.so.24.1 00:02:20.872 [252/268] Linking target lib/librte_cryptodev.so.24.1 00:02:20.872 [253/268] Linking target lib/librte_compressdev.so.24.1 00:02:20.872 [254/268] Linking target lib/librte_net.so.24.1 00:02:20.872 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:20.872 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:20.872 [257/268] Linking target lib/librte_cmdline.so.24.1 00:02:20.872 [258/268] Linking target lib/librte_security.so.24.1 00:02:20.872 [259/268] Linking target lib/librte_hash.so.24.1 00:02:21.132 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:21.703 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.703 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:21.964 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:21.964 [264/268] Linking target lib/librte_power.so.24.1 00:02:22.224 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:22.224 [266/268] Linking static target lib/librte_vhost.a 00:02:24.765 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.024 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:25.024 INFO: autodetecting backend as ninja 00:02:25.024 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:43.123 CC lib/ut_mock/mock.o 00:02:43.123 CC lib/ut/ut.o 00:02:43.123 CC lib/log/log_deprecated.o 00:02:43.123 CC lib/log/log.o 00:02:43.123 CC lib/log/log_flags.o 00:02:43.123 LIB 
libspdk_ut_mock.a 00:02:43.123 LIB libspdk_ut.a 00:02:43.123 SO libspdk_ut_mock.so.6.0 00:02:43.123 LIB libspdk_log.a 00:02:43.123 SO libspdk_ut.so.2.0 00:02:43.123 SO libspdk_log.so.7.1 00:02:43.123 SYMLINK libspdk_ut_mock.so 00:02:43.123 SYMLINK libspdk_ut.so 00:02:43.123 SYMLINK libspdk_log.so 00:02:43.382 CXX lib/trace_parser/trace.o 00:02:43.382 CC lib/util/base64.o 00:02:43.382 CC lib/util/bit_array.o 00:02:43.382 CC lib/dma/dma.o 00:02:43.382 CC lib/util/cpuset.o 00:02:43.644 CC lib/util/crc16.o 00:02:43.644 CC lib/util/crc32c.o 00:02:43.644 CC lib/util/crc32.o 00:02:43.644 CC lib/ioat/ioat.o 00:02:43.644 CC lib/vfio_user/host/vfio_user_pci.o 00:02:43.644 CC lib/util/crc32_ieee.o 00:02:43.644 CC lib/util/crc64.o 00:02:43.644 CC lib/util/dif.o 00:02:43.644 CC lib/util/fd.o 00:02:43.644 CC lib/util/fd_group.o 00:02:43.644 CC lib/util/file.o 00:02:43.644 LIB libspdk_dma.a 00:02:43.902 CC lib/util/hexlify.o 00:02:43.902 SO libspdk_dma.so.5.0 00:02:43.902 CC lib/vfio_user/host/vfio_user.o 00:02:43.902 LIB libspdk_ioat.a 00:02:43.902 SO libspdk_ioat.so.7.0 00:02:43.902 SYMLINK libspdk_dma.so 00:02:43.902 CC lib/util/iov.o 00:02:43.902 CC lib/util/math.o 00:02:43.902 SYMLINK libspdk_ioat.so 00:02:43.902 CC lib/util/net.o 00:02:43.902 CC lib/util/pipe.o 00:02:43.902 CC lib/util/strerror_tls.o 00:02:43.902 CC lib/util/string.o 00:02:43.902 CC lib/util/uuid.o 00:02:43.902 LIB libspdk_vfio_user.a 00:02:44.160 CC lib/util/xor.o 00:02:44.160 CC lib/util/zipf.o 00:02:44.160 CC lib/util/md5.o 00:02:44.160 SO libspdk_vfio_user.so.5.0 00:02:44.160 SYMLINK libspdk_vfio_user.so 00:02:44.417 LIB libspdk_util.a 00:02:44.417 LIB libspdk_trace_parser.a 00:02:44.675 SO libspdk_util.so.10.1 00:02:44.675 SO libspdk_trace_parser.so.6.0 00:02:44.675 SYMLINK libspdk_util.so 00:02:44.675 SYMLINK libspdk_trace_parser.so 00:02:44.934 CC lib/json/json_parse.o 00:02:44.934 CC lib/json/json_util.o 00:02:44.934 CC lib/json/json_write.o 00:02:44.934 CC lib/vmd/vmd.o 00:02:44.934 CC 
lib/vmd/led.o 00:02:44.934 CC lib/rdma_utils/rdma_utils.o 00:02:44.934 CC lib/idxd/idxd.o 00:02:44.934 CC lib/idxd/idxd_user.o 00:02:44.934 CC lib/env_dpdk/env.o 00:02:44.934 CC lib/conf/conf.o 00:02:45.194 CC lib/env_dpdk/memory.o 00:02:45.194 CC lib/env_dpdk/pci.o 00:02:45.194 LIB libspdk_conf.a 00:02:45.194 CC lib/idxd/idxd_kernel.o 00:02:45.194 LIB libspdk_rdma_utils.a 00:02:45.194 SO libspdk_conf.so.6.0 00:02:45.194 SO libspdk_rdma_utils.so.1.0 00:02:45.194 CC lib/env_dpdk/init.o 00:02:45.194 SYMLINK libspdk_conf.so 00:02:45.194 CC lib/env_dpdk/threads.o 00:02:45.194 LIB libspdk_json.a 00:02:45.194 SYMLINK libspdk_rdma_utils.so 00:02:45.194 CC lib/env_dpdk/pci_ioat.o 00:02:45.453 SO libspdk_json.so.6.0 00:02:45.453 CC lib/env_dpdk/pci_virtio.o 00:02:45.453 SYMLINK libspdk_json.so 00:02:45.453 CC lib/env_dpdk/pci_vmd.o 00:02:45.453 CC lib/env_dpdk/pci_idxd.o 00:02:45.453 CC lib/env_dpdk/pci_event.o 00:02:45.453 CC lib/env_dpdk/sigbus_handler.o 00:02:45.453 CC lib/env_dpdk/pci_dpdk.o 00:02:45.726 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:45.726 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:45.726 LIB libspdk_idxd.a 00:02:45.726 LIB libspdk_vmd.a 00:02:45.726 SO libspdk_idxd.so.12.1 00:02:45.726 SO libspdk_vmd.so.6.0 00:02:45.726 CC lib/rdma_provider/common.o 00:02:45.726 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:45.726 SYMLINK libspdk_idxd.so 00:02:45.726 CC lib/jsonrpc/jsonrpc_server.o 00:02:45.726 CC lib/jsonrpc/jsonrpc_client.o 00:02:45.726 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:45.726 SYMLINK libspdk_vmd.so 00:02:45.726 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:45.984 LIB libspdk_rdma_provider.a 00:02:45.984 SO libspdk_rdma_provider.so.7.0 00:02:46.243 LIB libspdk_jsonrpc.a 00:02:46.243 SYMLINK libspdk_rdma_provider.so 00:02:46.243 SO libspdk_jsonrpc.so.6.0 00:02:46.243 SYMLINK libspdk_jsonrpc.so 00:02:46.810 CC lib/rpc/rpc.o 00:02:46.810 LIB libspdk_env_dpdk.a 00:02:46.810 SO libspdk_env_dpdk.so.15.1 00:02:47.068 LIB libspdk_rpc.a 00:02:47.068 SO 
libspdk_rpc.so.6.0 00:02:47.068 SYMLINK libspdk_env_dpdk.so 00:02:47.068 SYMLINK libspdk_rpc.so 00:02:47.327 CC lib/trace/trace.o 00:02:47.327 CC lib/trace/trace_rpc.o 00:02:47.327 CC lib/trace/trace_flags.o 00:02:47.327 CC lib/keyring/keyring_rpc.o 00:02:47.327 CC lib/keyring/keyring.o 00:02:47.586 CC lib/notify/notify.o 00:02:47.586 CC lib/notify/notify_rpc.o 00:02:47.586 LIB libspdk_notify.a 00:02:47.586 SO libspdk_notify.so.6.0 00:02:47.846 LIB libspdk_trace.a 00:02:47.846 SYMLINK libspdk_notify.so 00:02:47.846 LIB libspdk_keyring.a 00:02:47.846 SO libspdk_trace.so.11.0 00:02:47.846 SO libspdk_keyring.so.2.0 00:02:47.846 SYMLINK libspdk_trace.so 00:02:47.846 SYMLINK libspdk_keyring.so 00:02:48.414 CC lib/thread/thread.o 00:02:48.414 CC lib/thread/iobuf.o 00:02:48.414 CC lib/sock/sock.o 00:02:48.414 CC lib/sock/sock_rpc.o 00:02:48.672 LIB libspdk_sock.a 00:02:48.931 SO libspdk_sock.so.10.0 00:02:48.932 SYMLINK libspdk_sock.so 00:02:49.190 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:49.190 CC lib/nvme/nvme_ctrlr.o 00:02:49.190 CC lib/nvme/nvme_fabric.o 00:02:49.190 CC lib/nvme/nvme_ns_cmd.o 00:02:49.190 CC lib/nvme/nvme_ns.o 00:02:49.190 CC lib/nvme/nvme_pcie_common.o 00:02:49.190 CC lib/nvme/nvme_pcie.o 00:02:49.190 CC lib/nvme/nvme.o 00:02:49.190 CC lib/nvme/nvme_qpair.o 00:02:50.126 CC lib/nvme/nvme_quirks.o 00:02:50.126 LIB libspdk_thread.a 00:02:50.126 CC lib/nvme/nvme_transport.o 00:02:50.126 SO libspdk_thread.so.11.0 00:02:50.126 CC lib/nvme/nvme_discovery.o 00:02:50.126 SYMLINK libspdk_thread.so 00:02:50.126 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:50.126 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:50.126 CC lib/nvme/nvme_tcp.o 00:02:50.384 CC lib/nvme/nvme_opal.o 00:02:50.384 CC lib/nvme/nvme_io_msg.o 00:02:50.644 CC lib/nvme/nvme_poll_group.o 00:02:50.644 CC lib/nvme/nvme_zns.o 00:02:50.644 CC lib/nvme/nvme_stubs.o 00:02:50.903 CC lib/nvme/nvme_auth.o 00:02:50.903 CC lib/accel/accel.o 00:02:50.903 CC lib/accel/accel_rpc.o 00:02:50.903 CC lib/blob/blobstore.o 
00:02:51.163 CC lib/init/json_config.o 00:02:51.163 CC lib/init/subsystem.o 00:02:51.163 CC lib/init/subsystem_rpc.o 00:02:51.163 CC lib/blob/request.o 00:02:51.163 CC lib/blob/zeroes.o 00:02:51.422 CC lib/blob/blob_bs_dev.o 00:02:51.422 CC lib/accel/accel_sw.o 00:02:51.422 CC lib/init/rpc.o 00:02:51.681 LIB libspdk_init.a 00:02:51.681 CC lib/virtio/virtio.o 00:02:51.681 SO libspdk_init.so.6.0 00:02:51.681 CC lib/virtio/virtio_vhost_user.o 00:02:51.681 SYMLINK libspdk_init.so 00:02:51.681 CC lib/nvme/nvme_cuse.o 00:02:51.681 CC lib/nvme/nvme_rdma.o 00:02:51.681 CC lib/fsdev/fsdev.o 00:02:51.941 CC lib/fsdev/fsdev_io.o 00:02:51.941 CC lib/fsdev/fsdev_rpc.o 00:02:51.941 CC lib/virtio/virtio_vfio_user.o 00:02:51.941 CC lib/virtio/virtio_pci.o 00:02:52.200 CC lib/event/app.o 00:02:52.200 CC lib/event/log_rpc.o 00:02:52.200 CC lib/event/reactor.o 00:02:52.200 LIB libspdk_virtio.a 00:02:52.200 CC lib/event/app_rpc.o 00:02:52.460 LIB libspdk_accel.a 00:02:52.460 SO libspdk_virtio.so.7.0 00:02:52.460 SO libspdk_accel.so.16.0 00:02:52.460 CC lib/event/scheduler_static.o 00:02:52.460 SYMLINK libspdk_virtio.so 00:02:52.460 LIB libspdk_fsdev.a 00:02:52.460 SYMLINK libspdk_accel.so 00:02:52.460 SO libspdk_fsdev.so.2.0 00:02:52.460 SYMLINK libspdk_fsdev.so 00:02:52.720 CC lib/bdev/bdev.o 00:02:52.720 CC lib/bdev/bdev_rpc.o 00:02:52.720 CC lib/bdev/bdev_zone.o 00:02:52.720 CC lib/bdev/scsi_nvme.o 00:02:52.720 CC lib/bdev/part.o 00:02:52.720 LIB libspdk_event.a 00:02:52.720 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:52.981 SO libspdk_event.so.14.0 00:02:52.981 SYMLINK libspdk_event.so 00:02:53.293 LIB libspdk_nvme.a 00:02:53.560 SO libspdk_nvme.so.15.0 00:02:53.560 LIB libspdk_fuse_dispatcher.a 00:02:53.560 SO libspdk_fuse_dispatcher.so.1.0 00:02:53.560 SYMLINK libspdk_fuse_dispatcher.so 00:02:53.820 SYMLINK libspdk_nvme.so 00:02:54.760 LIB libspdk_blob.a 00:02:54.760 SO libspdk_blob.so.12.0 00:02:55.020 SYMLINK libspdk_blob.so 00:02:55.281 CC lib/lvol/lvol.o 00:02:55.281 CC 
lib/blobfs/blobfs.o 00:02:55.281 CC lib/blobfs/tree.o 00:02:56.221 LIB libspdk_bdev.a 00:02:56.221 LIB libspdk_blobfs.a 00:02:56.221 SO libspdk_bdev.so.17.0 00:02:56.221 SO libspdk_blobfs.so.11.0 00:02:56.481 LIB libspdk_lvol.a 00:02:56.482 SYMLINK libspdk_blobfs.so 00:02:56.482 SO libspdk_lvol.so.11.0 00:02:56.482 SYMLINK libspdk_bdev.so 00:02:56.482 SYMLINK libspdk_lvol.so 00:02:56.741 CC lib/ublk/ublk.o 00:02:56.741 CC lib/ublk/ublk_rpc.o 00:02:56.741 CC lib/ftl/ftl_core.o 00:02:56.741 CC lib/ftl/ftl_init.o 00:02:56.741 CC lib/ftl/ftl_layout.o 00:02:56.741 CC lib/ftl/ftl_debug.o 00:02:56.741 CC lib/ftl/ftl_io.o 00:02:56.741 CC lib/nbd/nbd.o 00:02:56.741 CC lib/nvmf/ctrlr.o 00:02:56.741 CC lib/scsi/dev.o 00:02:57.001 CC lib/nvmf/ctrlr_discovery.o 00:02:57.001 CC lib/ftl/ftl_sb.o 00:02:57.001 CC lib/ftl/ftl_l2p.o 00:02:57.001 CC lib/scsi/lun.o 00:02:57.001 CC lib/ftl/ftl_l2p_flat.o 00:02:57.261 CC lib/ftl/ftl_nv_cache.o 00:02:57.261 CC lib/ftl/ftl_band.o 00:02:57.261 CC lib/nvmf/ctrlr_bdev.o 00:02:57.261 CC lib/nbd/nbd_rpc.o 00:02:57.261 CC lib/scsi/port.o 00:02:57.261 CC lib/nvmf/subsystem.o 00:02:57.520 CC lib/scsi/scsi.o 00:02:57.520 LIB libspdk_nbd.a 00:02:57.520 CC lib/scsi/scsi_bdev.o 00:02:57.520 SO libspdk_nbd.so.7.0 00:02:57.520 CC lib/nvmf/nvmf.o 00:02:57.520 LIB libspdk_ublk.a 00:02:57.520 SO libspdk_ublk.so.3.0 00:02:57.520 SYMLINK libspdk_nbd.so 00:02:57.520 CC lib/nvmf/nvmf_rpc.o 00:02:57.520 CC lib/nvmf/transport.o 00:02:57.520 SYMLINK libspdk_ublk.so 00:02:57.520 CC lib/ftl/ftl_band_ops.o 00:02:57.520 CC lib/ftl/ftl_writer.o 00:02:57.778 CC lib/scsi/scsi_pr.o 00:02:58.037 CC lib/scsi/scsi_rpc.o 00:02:58.037 CC lib/ftl/ftl_rq.o 00:02:58.037 CC lib/nvmf/tcp.o 00:02:58.037 CC lib/scsi/task.o 00:02:58.296 CC lib/nvmf/stubs.o 00:02:58.296 CC lib/ftl/ftl_reloc.o 00:02:58.296 CC lib/ftl/ftl_l2p_cache.o 00:02:58.296 LIB libspdk_scsi.a 00:02:58.296 CC lib/nvmf/mdns_server.o 00:02:58.296 SO libspdk_scsi.so.9.0 00:02:58.556 CC lib/nvmf/rdma.o 00:02:58.556 CC 
lib/nvmf/auth.o 00:02:58.556 SYMLINK libspdk_scsi.so 00:02:58.556 CC lib/ftl/ftl_p2l.o 00:02:58.816 CC lib/ftl/ftl_p2l_log.o 00:02:58.816 CC lib/iscsi/conn.o 00:02:58.816 CC lib/vhost/vhost.o 00:02:58.816 CC lib/vhost/vhost_rpc.o 00:02:58.816 CC lib/vhost/vhost_scsi.o 00:02:59.075 CC lib/iscsi/init_grp.o 00:02:59.075 CC lib/vhost/vhost_blk.o 00:02:59.075 CC lib/ftl/mngt/ftl_mngt.o 00:02:59.335 CC lib/iscsi/iscsi.o 00:02:59.335 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:59.335 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:59.335 CC lib/vhost/rte_vhost_user.o 00:02:59.595 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:59.595 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:59.595 CC lib/iscsi/param.o 00:02:59.855 CC lib/iscsi/portal_grp.o 00:02:59.855 CC lib/iscsi/tgt_node.o 00:02:59.855 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:59.855 CC lib/iscsi/iscsi_subsystem.o 00:03:00.115 CC lib/iscsi/iscsi_rpc.o 00:03:00.115 CC lib/iscsi/task.o 00:03:00.115 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:00.115 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:00.115 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:00.115 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:00.375 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:00.375 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:00.375 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:00.375 CC lib/ftl/utils/ftl_conf.o 00:03:00.635 CC lib/ftl/utils/ftl_md.o 00:03:00.635 LIB libspdk_vhost.a 00:03:00.635 CC lib/ftl/utils/ftl_mempool.o 00:03:00.635 CC lib/ftl/utils/ftl_bitmap.o 00:03:00.635 CC lib/ftl/utils/ftl_property.o 00:03:00.635 SO libspdk_vhost.so.8.0 00:03:00.635 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:00.635 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:00.635 SYMLINK libspdk_vhost.so 00:03:00.635 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:00.635 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:00.635 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:00.894 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:00.895 LIB libspdk_iscsi.a 00:03:00.895 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:00.895 CC lib/ftl/upgrade/ftl_sb_v3.o 
00:03:00.895 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:00.895 SO libspdk_iscsi.so.8.0 00:03:00.895 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:00.895 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:00.895 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:00.895 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:00.895 LIB libspdk_nvmf.a 00:03:00.895 CC lib/ftl/base/ftl_base_dev.o 00:03:01.154 CC lib/ftl/base/ftl_base_bdev.o 00:03:01.154 SYMLINK libspdk_iscsi.so 00:03:01.154 CC lib/ftl/ftl_trace.o 00:03:01.154 SO libspdk_nvmf.so.20.0 00:03:01.423 SYMLINK libspdk_nvmf.so 00:03:01.423 LIB libspdk_ftl.a 00:03:01.707 SO libspdk_ftl.so.9.0 00:03:01.967 SYMLINK libspdk_ftl.so 00:03:02.536 CC module/env_dpdk/env_dpdk_rpc.o 00:03:02.536 CC module/sock/posix/posix.o 00:03:02.536 CC module/blob/bdev/blob_bdev.o 00:03:02.536 CC module/accel/iaa/accel_iaa.o 00:03:02.536 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:02.536 CC module/accel/error/accel_error.o 00:03:02.536 CC module/accel/dsa/accel_dsa.o 00:03:02.536 CC module/accel/ioat/accel_ioat.o 00:03:02.536 CC module/keyring/file/keyring.o 00:03:02.536 CC module/fsdev/aio/fsdev_aio.o 00:03:02.536 LIB libspdk_env_dpdk_rpc.a 00:03:02.536 SO libspdk_env_dpdk_rpc.so.6.0 00:03:02.536 SYMLINK libspdk_env_dpdk_rpc.so 00:03:02.536 CC module/accel/ioat/accel_ioat_rpc.o 00:03:02.536 CC module/keyring/file/keyring_rpc.o 00:03:02.536 LIB libspdk_scheduler_dynamic.a 00:03:02.536 CC module/accel/iaa/accel_iaa_rpc.o 00:03:02.536 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:02.796 CC module/accel/error/accel_error_rpc.o 00:03:02.796 SO libspdk_scheduler_dynamic.so.4.0 00:03:02.796 LIB libspdk_accel_ioat.a 00:03:02.796 SO libspdk_accel_ioat.so.6.0 00:03:02.796 SYMLINK libspdk_scheduler_dynamic.so 00:03:02.796 LIB libspdk_keyring_file.a 00:03:02.796 LIB libspdk_blob_bdev.a 00:03:02.796 CC module/accel/dsa/accel_dsa_rpc.o 00:03:02.796 SO libspdk_blob_bdev.so.12.0 00:03:02.796 SO libspdk_keyring_file.so.2.0 00:03:02.796 LIB libspdk_accel_iaa.a 00:03:02.796 SYMLINK 
libspdk_accel_ioat.so 00:03:02.796 CC module/fsdev/aio/linux_aio_mgr.o 00:03:02.796 SO libspdk_accel_iaa.so.3.0 00:03:02.796 LIB libspdk_accel_error.a 00:03:02.796 SO libspdk_accel_error.so.2.0 00:03:02.796 SYMLINK libspdk_blob_bdev.so 00:03:02.796 SYMLINK libspdk_keyring_file.so 00:03:02.796 SYMLINK libspdk_accel_iaa.so 00:03:02.796 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:02.796 SYMLINK libspdk_accel_error.so 00:03:02.796 LIB libspdk_accel_dsa.a 00:03:03.055 SO libspdk_accel_dsa.so.5.0 00:03:03.055 CC module/scheduler/gscheduler/gscheduler.o 00:03:03.055 SYMLINK libspdk_accel_dsa.so 00:03:03.055 CC module/keyring/linux/keyring.o 00:03:03.055 LIB libspdk_scheduler_dpdk_governor.a 00:03:03.055 LIB libspdk_scheduler_gscheduler.a 00:03:03.055 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:03.055 SO libspdk_scheduler_gscheduler.so.4.0 00:03:03.055 CC module/blobfs/bdev/blobfs_bdev.o 00:03:03.314 CC module/bdev/error/vbdev_error.o 00:03:03.314 CC module/bdev/delay/vbdev_delay.o 00:03:03.314 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:03.314 CC module/bdev/lvol/vbdev_lvol.o 00:03:03.314 CC module/keyring/linux/keyring_rpc.o 00:03:03.314 CC module/bdev/gpt/gpt.o 00:03:03.314 SYMLINK libspdk_scheduler_gscheduler.so 00:03:03.314 LIB libspdk_sock_posix.a 00:03:03.314 LIB libspdk_fsdev_aio.a 00:03:03.314 LIB libspdk_keyring_linux.a 00:03:03.314 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:03.314 SO libspdk_sock_posix.so.6.0 00:03:03.314 CC module/bdev/malloc/bdev_malloc.o 00:03:03.314 SO libspdk_fsdev_aio.so.1.0 00:03:03.314 SO libspdk_keyring_linux.so.1.0 00:03:03.314 CC module/bdev/null/bdev_null.o 00:03:03.574 CC module/bdev/gpt/vbdev_gpt.o 00:03:03.574 CC module/bdev/error/vbdev_error_rpc.o 00:03:03.574 SYMLINK libspdk_keyring_linux.so 00:03:03.574 SYMLINK libspdk_sock_posix.so 00:03:03.574 SYMLINK libspdk_fsdev_aio.so 00:03:03.574 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:03.574 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:03.574 CC 
module/bdev/delay/vbdev_delay_rpc.o 00:03:03.574 LIB libspdk_blobfs_bdev.a 00:03:03.574 SO libspdk_blobfs_bdev.so.6.0 00:03:03.574 LIB libspdk_bdev_error.a 00:03:03.574 CC module/bdev/null/bdev_null_rpc.o 00:03:03.574 SYMLINK libspdk_blobfs_bdev.so 00:03:03.574 LIB libspdk_bdev_delay.a 00:03:03.574 SO libspdk_bdev_error.so.6.0 00:03:03.574 SO libspdk_bdev_delay.so.6.0 00:03:03.833 LIB libspdk_bdev_gpt.a 00:03:03.833 SYMLINK libspdk_bdev_delay.so 00:03:03.833 SYMLINK libspdk_bdev_error.so 00:03:03.833 CC module/bdev/nvme/bdev_nvme.o 00:03:03.833 SO libspdk_bdev_gpt.so.6.0 00:03:03.833 CC module/bdev/passthru/vbdev_passthru.o 00:03:03.833 SYMLINK libspdk_bdev_gpt.so 00:03:03.833 LIB libspdk_bdev_null.a 00:03:03.833 LIB libspdk_bdev_malloc.a 00:03:03.833 SO libspdk_bdev_malloc.so.6.0 00:03:03.833 SO libspdk_bdev_null.so.6.0 00:03:03.833 CC module/bdev/raid/bdev_raid.o 00:03:03.833 CC module/bdev/split/vbdev_split.o 00:03:03.833 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:03.833 LIB libspdk_bdev_lvol.a 00:03:03.833 CC module/bdev/aio/bdev_aio.o 00:03:03.833 SYMLINK libspdk_bdev_null.so 00:03:03.833 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:03.833 SYMLINK libspdk_bdev_malloc.so 00:03:03.833 SO libspdk_bdev_lvol.so.6.0 00:03:04.093 CC module/bdev/ftl/bdev_ftl.o 00:03:04.093 SYMLINK libspdk_bdev_lvol.so 00:03:04.093 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:04.093 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:04.093 CC module/bdev/iscsi/bdev_iscsi.o 00:03:04.093 CC module/bdev/split/vbdev_split_rpc.o 00:03:04.093 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:04.093 LIB libspdk_bdev_passthru.a 00:03:04.093 SO libspdk_bdev_passthru.so.6.0 00:03:04.353 CC module/bdev/raid/bdev_raid_rpc.o 00:03:04.353 SYMLINK libspdk_bdev_passthru.so 00:03:04.353 CC module/bdev/raid/bdev_raid_sb.o 00:03:04.353 CC module/bdev/aio/bdev_aio_rpc.o 00:03:04.353 LIB libspdk_bdev_zone_block.a 00:03:04.353 LIB libspdk_bdev_ftl.a 00:03:04.353 SO libspdk_bdev_zone_block.so.6.0 
00:03:04.353 LIB libspdk_bdev_split.a 00:03:04.353 SO libspdk_bdev_ftl.so.6.0 00:03:04.353 SO libspdk_bdev_split.so.6.0 00:03:04.353 SYMLINK libspdk_bdev_zone_block.so 00:03:04.353 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:04.353 SYMLINK libspdk_bdev_ftl.so 00:03:04.353 CC module/bdev/nvme/nvme_rpc.o 00:03:04.353 LIB libspdk_bdev_aio.a 00:03:04.612 SYMLINK libspdk_bdev_split.so 00:03:04.612 CC module/bdev/raid/raid0.o 00:03:04.612 SO libspdk_bdev_aio.so.6.0 00:03:04.612 CC module/bdev/nvme/bdev_mdns_client.o 00:03:04.612 LIB libspdk_bdev_iscsi.a 00:03:04.612 CC module/bdev/nvme/vbdev_opal.o 00:03:04.612 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:04.612 SO libspdk_bdev_iscsi.so.6.0 00:03:04.612 SYMLINK libspdk_bdev_aio.so 00:03:04.612 CC module/bdev/raid/raid1.o 00:03:04.612 SYMLINK libspdk_bdev_iscsi.so 00:03:04.612 CC module/bdev/raid/concat.o 00:03:04.612 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:04.871 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:04.871 CC module/bdev/raid/raid5f.o 00:03:04.871 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:04.871 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:05.130 LIB libspdk_bdev_virtio.a 00:03:05.389 SO libspdk_bdev_virtio.so.6.0 00:03:05.389 SYMLINK libspdk_bdev_virtio.so 00:03:05.389 LIB libspdk_bdev_raid.a 00:03:05.648 SO libspdk_bdev_raid.so.6.0 00:03:05.648 SYMLINK libspdk_bdev_raid.so 00:03:07.029 LIB libspdk_bdev_nvme.a 00:03:07.029 SO libspdk_bdev_nvme.so.7.1 00:03:07.029 SYMLINK libspdk_bdev_nvme.so 00:03:07.600 CC module/event/subsystems/iobuf/iobuf.o 00:03:07.600 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:07.600 CC module/event/subsystems/scheduler/scheduler.o 00:03:07.600 CC module/event/subsystems/vmd/vmd.o 00:03:07.600 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:07.600 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:07.600 CC module/event/subsystems/sock/sock.o 00:03:07.600 CC module/event/subsystems/fsdev/fsdev.o 00:03:07.600 CC module/event/subsystems/keyring/keyring.o 00:03:07.859 LIB 
libspdk_event_keyring.a 00:03:07.859 LIB libspdk_event_scheduler.a 00:03:07.859 LIB libspdk_event_fsdev.a 00:03:07.859 LIB libspdk_event_sock.a 00:03:07.859 LIB libspdk_event_vmd.a 00:03:07.859 LIB libspdk_event_vhost_blk.a 00:03:07.859 LIB libspdk_event_iobuf.a 00:03:07.859 SO libspdk_event_keyring.so.1.0 00:03:07.859 SO libspdk_event_scheduler.so.4.0 00:03:07.859 SO libspdk_event_sock.so.5.0 00:03:07.859 SO libspdk_event_fsdev.so.1.0 00:03:07.859 SO libspdk_event_vmd.so.6.0 00:03:07.859 SO libspdk_event_vhost_blk.so.3.0 00:03:07.859 SO libspdk_event_iobuf.so.3.0 00:03:07.859 SYMLINK libspdk_event_keyring.so 00:03:07.859 SYMLINK libspdk_event_scheduler.so 00:03:07.859 SYMLINK libspdk_event_fsdev.so 00:03:07.859 SYMLINK libspdk_event_sock.so 00:03:07.859 SYMLINK libspdk_event_vhost_blk.so 00:03:07.859 SYMLINK libspdk_event_vmd.so 00:03:07.859 SYMLINK libspdk_event_iobuf.so 00:03:08.428 CC module/event/subsystems/accel/accel.o 00:03:08.428 LIB libspdk_event_accel.a 00:03:08.428 SO libspdk_event_accel.so.6.0 00:03:08.717 SYMLINK libspdk_event_accel.so 00:03:08.977 CC module/event/subsystems/bdev/bdev.o 00:03:09.237 LIB libspdk_event_bdev.a 00:03:09.237 SO libspdk_event_bdev.so.6.0 00:03:09.237 SYMLINK libspdk_event_bdev.so 00:03:09.498 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:09.498 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:09.498 CC module/event/subsystems/nbd/nbd.o 00:03:09.498 CC module/event/subsystems/scsi/scsi.o 00:03:09.498 CC module/event/subsystems/ublk/ublk.o 00:03:09.758 LIB libspdk_event_nbd.a 00:03:09.758 LIB libspdk_event_ublk.a 00:03:09.758 LIB libspdk_event_scsi.a 00:03:09.758 SO libspdk_event_nbd.so.6.0 00:03:09.758 SO libspdk_event_ublk.so.3.0 00:03:09.758 LIB libspdk_event_nvmf.a 00:03:09.758 SO libspdk_event_scsi.so.6.0 00:03:09.758 SO libspdk_event_nvmf.so.6.0 00:03:09.758 SYMLINK libspdk_event_nbd.so 00:03:09.758 SYMLINK libspdk_event_ublk.so 00:03:10.018 SYMLINK libspdk_event_scsi.so 00:03:10.018 SYMLINK libspdk_event_nvmf.so 
00:03:10.277 CC module/event/subsystems/iscsi/iscsi.o 00:03:10.277 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:10.277 LIB libspdk_event_iscsi.a 00:03:10.537 LIB libspdk_event_vhost_scsi.a 00:03:10.537 SO libspdk_event_iscsi.so.6.0 00:03:10.537 SO libspdk_event_vhost_scsi.so.3.0 00:03:10.537 SYMLINK libspdk_event_iscsi.so 00:03:10.537 SYMLINK libspdk_event_vhost_scsi.so 00:03:10.797 SO libspdk.so.6.0 00:03:10.797 SYMLINK libspdk.so 00:03:11.056 CXX app/trace/trace.o 00:03:11.056 CC app/spdk_lspci/spdk_lspci.o 00:03:11.056 CC app/spdk_nvme_perf/perf.o 00:03:11.056 CC app/trace_record/trace_record.o 00:03:11.056 CC app/spdk_nvme_identify/identify.o 00:03:11.056 CC app/nvmf_tgt/nvmf_main.o 00:03:11.056 CC app/iscsi_tgt/iscsi_tgt.o 00:03:11.056 CC app/spdk_tgt/spdk_tgt.o 00:03:11.056 CC test/thread/poller_perf/poller_perf.o 00:03:11.056 CC examples/util/zipf/zipf.o 00:03:11.315 LINK spdk_lspci 00:03:11.315 LINK nvmf_tgt 00:03:11.315 LINK poller_perf 00:03:11.315 LINK zipf 00:03:11.315 LINK iscsi_tgt 00:03:11.315 LINK spdk_trace_record 00:03:11.315 LINK spdk_tgt 00:03:11.574 CC app/spdk_nvme_discover/discovery_aer.o 00:03:11.574 LINK spdk_trace 00:03:11.574 CC app/spdk_top/spdk_top.o 00:03:11.574 CC app/spdk_dd/spdk_dd.o 00:03:11.574 LINK spdk_nvme_discover 00:03:11.834 CC examples/ioat/perf/perf.o 00:03:11.834 CC test/dma/test_dma/test_dma.o 00:03:11.834 TEST_HEADER include/spdk/accel.h 00:03:11.834 TEST_HEADER include/spdk/accel_module.h 00:03:11.834 TEST_HEADER include/spdk/assert.h 00:03:11.834 TEST_HEADER include/spdk/barrier.h 00:03:11.834 TEST_HEADER include/spdk/base64.h 00:03:11.834 TEST_HEADER include/spdk/bdev.h 00:03:11.834 CC app/fio/nvme/fio_plugin.o 00:03:11.834 TEST_HEADER include/spdk/bdev_module.h 00:03:11.834 TEST_HEADER include/spdk/bdev_zone.h 00:03:11.834 TEST_HEADER include/spdk/bit_array.h 00:03:11.834 TEST_HEADER include/spdk/bit_pool.h 00:03:11.834 TEST_HEADER include/spdk/blob_bdev.h 00:03:11.834 TEST_HEADER 
include/spdk/blobfs_bdev.h 00:03:11.834 TEST_HEADER include/spdk/blobfs.h 00:03:11.834 TEST_HEADER include/spdk/blob.h 00:03:11.834 TEST_HEADER include/spdk/conf.h 00:03:11.834 TEST_HEADER include/spdk/config.h 00:03:11.834 TEST_HEADER include/spdk/cpuset.h 00:03:11.834 TEST_HEADER include/spdk/crc16.h 00:03:11.834 TEST_HEADER include/spdk/crc32.h 00:03:11.834 TEST_HEADER include/spdk/crc64.h 00:03:11.834 TEST_HEADER include/spdk/dif.h 00:03:11.834 TEST_HEADER include/spdk/dma.h 00:03:11.834 TEST_HEADER include/spdk/endian.h 00:03:11.834 TEST_HEADER include/spdk/env_dpdk.h 00:03:11.834 TEST_HEADER include/spdk/env.h 00:03:11.834 TEST_HEADER include/spdk/event.h 00:03:11.834 TEST_HEADER include/spdk/fd_group.h 00:03:11.834 TEST_HEADER include/spdk/fd.h 00:03:11.834 TEST_HEADER include/spdk/file.h 00:03:11.834 TEST_HEADER include/spdk/fsdev.h 00:03:11.834 TEST_HEADER include/spdk/fsdev_module.h 00:03:11.834 TEST_HEADER include/spdk/ftl.h 00:03:11.834 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:11.834 TEST_HEADER include/spdk/gpt_spec.h 00:03:11.834 TEST_HEADER include/spdk/hexlify.h 00:03:11.834 TEST_HEADER include/spdk/histogram_data.h 00:03:11.834 TEST_HEADER include/spdk/idxd.h 00:03:11.834 TEST_HEADER include/spdk/idxd_spec.h 00:03:11.834 TEST_HEADER include/spdk/init.h 00:03:11.834 TEST_HEADER include/spdk/ioat.h 00:03:11.834 TEST_HEADER include/spdk/ioat_spec.h 00:03:11.834 TEST_HEADER include/spdk/iscsi_spec.h 00:03:11.834 TEST_HEADER include/spdk/json.h 00:03:11.834 CC test/app/bdev_svc/bdev_svc.o 00:03:11.834 TEST_HEADER include/spdk/jsonrpc.h 00:03:11.834 TEST_HEADER include/spdk/keyring.h 00:03:11.834 TEST_HEADER include/spdk/keyring_module.h 00:03:11.834 TEST_HEADER include/spdk/likely.h 00:03:11.834 TEST_HEADER include/spdk/log.h 00:03:11.834 TEST_HEADER include/spdk/lvol.h 00:03:11.834 TEST_HEADER include/spdk/md5.h 00:03:11.834 TEST_HEADER include/spdk/memory.h 00:03:11.834 TEST_HEADER include/spdk/mmio.h 00:03:11.834 TEST_HEADER 
include/spdk/nbd.h 00:03:11.834 TEST_HEADER include/spdk/net.h 00:03:11.834 TEST_HEADER include/spdk/notify.h 00:03:11.834 TEST_HEADER include/spdk/nvme.h 00:03:11.834 TEST_HEADER include/spdk/nvme_intel.h 00:03:11.834 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:11.834 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:11.834 TEST_HEADER include/spdk/nvme_spec.h 00:03:11.834 TEST_HEADER include/spdk/nvme_zns.h 00:03:11.834 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:11.834 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:11.834 TEST_HEADER include/spdk/nvmf.h 00:03:11.834 TEST_HEADER include/spdk/nvmf_spec.h 00:03:11.834 TEST_HEADER include/spdk/nvmf_transport.h 00:03:11.834 TEST_HEADER include/spdk/opal.h 00:03:11.834 TEST_HEADER include/spdk/opal_spec.h 00:03:11.834 TEST_HEADER include/spdk/pci_ids.h 00:03:11.834 TEST_HEADER include/spdk/pipe.h 00:03:11.834 TEST_HEADER include/spdk/queue.h 00:03:11.834 TEST_HEADER include/spdk/reduce.h 00:03:11.834 TEST_HEADER include/spdk/rpc.h 00:03:11.834 TEST_HEADER include/spdk/scheduler.h 00:03:11.834 TEST_HEADER include/spdk/scsi.h 00:03:11.834 TEST_HEADER include/spdk/scsi_spec.h 00:03:11.834 TEST_HEADER include/spdk/sock.h 00:03:11.834 TEST_HEADER include/spdk/stdinc.h 00:03:11.834 TEST_HEADER include/spdk/string.h 00:03:11.834 TEST_HEADER include/spdk/thread.h 00:03:11.834 LINK ioat_perf 00:03:11.834 TEST_HEADER include/spdk/trace.h 00:03:11.834 TEST_HEADER include/spdk/trace_parser.h 00:03:11.834 TEST_HEADER include/spdk/tree.h 00:03:11.834 TEST_HEADER include/spdk/ublk.h 00:03:11.834 TEST_HEADER include/spdk/util.h 00:03:11.834 TEST_HEADER include/spdk/uuid.h 00:03:11.834 TEST_HEADER include/spdk/version.h 00:03:11.834 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:12.093 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:12.093 TEST_HEADER include/spdk/vhost.h 00:03:12.093 TEST_HEADER include/spdk/vmd.h 00:03:12.093 TEST_HEADER include/spdk/xor.h 00:03:12.093 TEST_HEADER include/spdk/zipf.h 00:03:12.093 CXX 
test/cpp_headers/accel.o 00:03:12.093 CC examples/vmd/lsvmd/lsvmd.o 00:03:12.093 LINK spdk_nvme_perf 00:03:12.093 LINK spdk_dd 00:03:12.093 LINK bdev_svc 00:03:12.093 LINK spdk_nvme_identify 00:03:12.093 LINK lsvmd 00:03:12.093 CXX test/cpp_headers/accel_module.o 00:03:12.093 CC examples/ioat/verify/verify.o 00:03:12.352 LINK test_dma 00:03:12.352 CXX test/cpp_headers/assert.o 00:03:12.352 LINK verify 00:03:12.352 CC examples/vmd/led/led.o 00:03:12.352 LINK spdk_nvme 00:03:12.352 CC examples/idxd/perf/perf.o 00:03:12.352 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:12.352 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:12.352 CXX test/cpp_headers/barrier.o 00:03:12.610 CXX test/cpp_headers/base64.o 00:03:12.610 CXX test/cpp_headers/bdev.o 00:03:12.610 CC examples/thread/thread/thread_ex.o 00:03:12.610 LINK led 00:03:12.610 CC app/fio/bdev/fio_plugin.o 00:03:12.611 LINK spdk_top 00:03:12.611 LINK interrupt_tgt 00:03:12.611 CXX test/cpp_headers/bdev_module.o 00:03:12.868 LINK idxd_perf 00:03:12.868 CC examples/sock/hello_world/hello_sock.o 00:03:12.868 LINK thread 00:03:12.868 CC test/env/vtophys/vtophys.o 00:03:12.868 CC test/app/histogram_perf/histogram_perf.o 00:03:12.868 CC test/app/jsoncat/jsoncat.o 00:03:12.868 CC test/env/mem_callbacks/mem_callbacks.o 00:03:12.868 LINK nvme_fuzz 00:03:12.868 CXX test/cpp_headers/bdev_zone.o 00:03:13.126 CC test/app/stub/stub.o 00:03:13.126 LINK vtophys 00:03:13.126 LINK histogram_perf 00:03:13.126 LINK jsoncat 00:03:13.126 LINK hello_sock 00:03:13.126 CXX test/cpp_headers/bit_array.o 00:03:13.126 LINK spdk_bdev 00:03:13.126 CC test/event/event_perf/event_perf.o 00:03:13.126 CXX test/cpp_headers/bit_pool.o 00:03:13.126 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:13.126 LINK stub 00:03:13.385 CXX test/cpp_headers/blob_bdev.o 00:03:13.385 LINK event_perf 00:03:13.385 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:13.385 CC examples/accel/perf/accel_perf.o 00:03:13.385 CC app/vhost/vhost.o 00:03:13.385 CC 
test/env/memory/memory_ut.o 00:03:13.385 CXX test/cpp_headers/blobfs_bdev.o 00:03:13.385 CC examples/blob/hello_world/hello_blob.o 00:03:13.646 LINK mem_callbacks 00:03:13.646 CC test/event/reactor/reactor.o 00:03:13.646 LINK env_dpdk_post_init 00:03:13.646 LINK vhost 00:03:13.646 CXX test/cpp_headers/blobfs.o 00:03:13.646 LINK reactor 00:03:13.646 CC test/env/pci/pci_ut.o 00:03:13.646 LINK hello_blob 00:03:13.906 CXX test/cpp_headers/blob.o 00:03:13.906 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:13.906 CC examples/nvme/hello_world/hello_world.o 00:03:13.906 CC test/event/reactor_perf/reactor_perf.o 00:03:13.906 CXX test/cpp_headers/conf.o 00:03:13.906 CC test/nvme/aer/aer.o 00:03:13.906 LINK accel_perf 00:03:14.165 LINK reactor_perf 00:03:14.165 CC examples/blob/cli/blobcli.o 00:03:14.165 LINK pci_ut 00:03:14.165 LINK hello_fsdev 00:03:14.165 CXX test/cpp_headers/config.o 00:03:14.165 LINK hello_world 00:03:14.165 CXX test/cpp_headers/cpuset.o 00:03:14.165 CXX test/cpp_headers/crc16.o 00:03:14.425 CC test/event/app_repeat/app_repeat.o 00:03:14.425 LINK aer 00:03:14.425 CXX test/cpp_headers/crc32.o 00:03:14.425 CXX test/cpp_headers/crc64.o 00:03:14.425 CC examples/nvme/reconnect/reconnect.o 00:03:14.425 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:14.425 LINK app_repeat 00:03:14.425 CC test/rpc_client/rpc_client_test.o 00:03:14.685 CXX test/cpp_headers/dif.o 00:03:14.685 CC test/nvme/reset/reset.o 00:03:14.685 LINK blobcli 00:03:14.685 LINK rpc_client_test 00:03:14.685 LINK memory_ut 00:03:14.685 CC test/accel/dif/dif.o 00:03:14.685 CXX test/cpp_headers/dma.o 00:03:14.685 CC test/event/scheduler/scheduler.o 00:03:14.685 LINK reconnect 00:03:14.944 CXX test/cpp_headers/endian.o 00:03:14.944 CXX test/cpp_headers/env_dpdk.o 00:03:14.944 LINK reset 00:03:14.944 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:14.944 LINK nvme_manage 00:03:14.944 LINK scheduler 00:03:14.944 CC test/nvme/sgl/sgl.o 00:03:14.944 CC test/nvme/e2edp/nvme_dp.o 00:03:14.944 CXX 
test/cpp_headers/env.o 00:03:14.944 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:15.204 CC test/nvme/overhead/overhead.o 00:03:15.204 CXX test/cpp_headers/event.o 00:03:15.204 CC examples/nvme/arbitration/arbitration.o 00:03:15.204 CC test/nvme/err_injection/err_injection.o 00:03:15.204 CC examples/bdev/hello_world/hello_bdev.o 00:03:15.204 LINK sgl 00:03:15.204 LINK iscsi_fuzz 00:03:15.204 LINK nvme_dp 00:03:15.464 CXX test/cpp_headers/fd_group.o 00:03:15.464 LINK overhead 00:03:15.464 LINK err_injection 00:03:15.464 CXX test/cpp_headers/fd.o 00:03:15.464 LINK dif 00:03:15.464 CXX test/cpp_headers/file.o 00:03:15.464 CXX test/cpp_headers/fsdev.o 00:03:15.464 LINK hello_bdev 00:03:15.464 LINK vhost_fuzz 00:03:15.725 LINK arbitration 00:03:15.725 CXX test/cpp_headers/fsdev_module.o 00:03:15.725 CC test/nvme/startup/startup.o 00:03:15.725 CXX test/cpp_headers/ftl.o 00:03:15.725 CC examples/bdev/bdevperf/bdevperf.o 00:03:15.725 CC test/nvme/reserve/reserve.o 00:03:15.725 CC test/nvme/simple_copy/simple_copy.o 00:03:15.725 CC test/nvme/connect_stress/connect_stress.o 00:03:15.725 LINK startup 00:03:15.725 CXX test/cpp_headers/fuse_dispatcher.o 00:03:15.989 CC test/blobfs/mkfs/mkfs.o 00:03:15.989 CC examples/nvme/hotplug/hotplug.o 00:03:15.989 CC test/nvme/boot_partition/boot_partition.o 00:03:15.989 CC test/lvol/esnap/esnap.o 00:03:15.989 LINK reserve 00:03:15.989 LINK connect_stress 00:03:15.989 LINK simple_copy 00:03:15.989 CXX test/cpp_headers/gpt_spec.o 00:03:15.989 LINK boot_partition 00:03:15.989 LINK mkfs 00:03:15.989 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:15.989 LINK hotplug 00:03:16.266 CXX test/cpp_headers/hexlify.o 00:03:16.267 CC test/nvme/compliance/nvme_compliance.o 00:03:16.267 CC test/nvme/fused_ordering/fused_ordering.o 00:03:16.267 LINK cmb_copy 00:03:16.267 CXX test/cpp_headers/histogram_data.o 00:03:16.267 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:16.267 CC test/bdev/bdevio/bdevio.o 00:03:16.267 CC examples/nvme/abort/abort.o 
00:03:16.541 CC test/nvme/fdp/fdp.o 00:03:16.541 LINK fused_ordering 00:03:16.541 CXX test/cpp_headers/idxd.o 00:03:16.541 LINK doorbell_aers 00:03:16.541 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:16.541 LINK nvme_compliance 00:03:16.541 CXX test/cpp_headers/idxd_spec.o 00:03:16.541 LINK bdevperf 00:03:16.541 CXX test/cpp_headers/init.o 00:03:16.541 CXX test/cpp_headers/ioat.o 00:03:16.801 LINK pmr_persistence 00:03:16.801 CXX test/cpp_headers/ioat_spec.o 00:03:16.801 CC test/nvme/cuse/cuse.o 00:03:16.801 LINK fdp 00:03:16.801 LINK abort 00:03:16.801 CXX test/cpp_headers/iscsi_spec.o 00:03:16.801 LINK bdevio 00:03:16.801 CXX test/cpp_headers/json.o 00:03:16.801 CXX test/cpp_headers/jsonrpc.o 00:03:16.801 CXX test/cpp_headers/keyring.o 00:03:16.801 CXX test/cpp_headers/keyring_module.o 00:03:16.801 CXX test/cpp_headers/likely.o 00:03:16.801 CXX test/cpp_headers/log.o 00:03:16.801 CXX test/cpp_headers/lvol.o 00:03:17.061 CXX test/cpp_headers/md5.o 00:03:17.061 CXX test/cpp_headers/memory.o 00:03:17.061 CXX test/cpp_headers/mmio.o 00:03:17.061 CXX test/cpp_headers/nbd.o 00:03:17.061 CXX test/cpp_headers/net.o 00:03:17.061 CXX test/cpp_headers/notify.o 00:03:17.061 CXX test/cpp_headers/nvme.o 00:03:17.061 CXX test/cpp_headers/nvme_intel.o 00:03:17.061 CC examples/nvmf/nvmf/nvmf.o 00:03:17.061 CXX test/cpp_headers/nvme_ocssd.o 00:03:17.061 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:17.061 CXX test/cpp_headers/nvme_spec.o 00:03:17.320 CXX test/cpp_headers/nvme_zns.o 00:03:17.320 CXX test/cpp_headers/nvmf_cmd.o 00:03:17.320 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:17.320 CXX test/cpp_headers/nvmf.o 00:03:17.320 CXX test/cpp_headers/nvmf_spec.o 00:03:17.320 CXX test/cpp_headers/nvmf_transport.o 00:03:17.320 CXX test/cpp_headers/opal.o 00:03:17.320 CXX test/cpp_headers/opal_spec.o 00:03:17.320 CXX test/cpp_headers/pci_ids.o 00:03:17.320 LINK nvmf 00:03:17.320 CXX test/cpp_headers/pipe.o 00:03:17.320 CXX test/cpp_headers/queue.o 00:03:17.320 CXX 
test/cpp_headers/reduce.o 00:03:17.580 CXX test/cpp_headers/rpc.o 00:03:17.580 CXX test/cpp_headers/scheduler.o 00:03:17.580 CXX test/cpp_headers/scsi.o 00:03:17.580 CXX test/cpp_headers/scsi_spec.o 00:03:17.580 CXX test/cpp_headers/sock.o 00:03:17.580 CXX test/cpp_headers/stdinc.o 00:03:17.580 CXX test/cpp_headers/string.o 00:03:17.580 CXX test/cpp_headers/thread.o 00:03:17.580 CXX test/cpp_headers/trace.o 00:03:17.580 CXX test/cpp_headers/trace_parser.o 00:03:17.580 CXX test/cpp_headers/tree.o 00:03:17.580 CXX test/cpp_headers/ublk.o 00:03:17.580 CXX test/cpp_headers/util.o 00:03:17.839 CXX test/cpp_headers/uuid.o 00:03:17.839 CXX test/cpp_headers/version.o 00:03:17.839 CXX test/cpp_headers/vfio_user_pci.o 00:03:17.839 CXX test/cpp_headers/vfio_user_spec.o 00:03:17.839 CXX test/cpp_headers/vhost.o 00:03:17.839 CXX test/cpp_headers/vmd.o 00:03:17.839 CXX test/cpp_headers/xor.o 00:03:17.839 CXX test/cpp_headers/zipf.o 00:03:18.127 LINK cuse 00:03:22.326 LINK esnap 00:03:22.326 00:03:22.326 real 1m27.205s 00:03:22.326 user 7m22.988s 00:03:22.326 sys 1m40.742s 00:03:22.326 11:41:48 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:22.326 11:41:48 make -- common/autotest_common.sh@10 -- $ set +x 00:03:22.326 ************************************ 00:03:22.326 END TEST make 00:03:22.326 ************************************ 00:03:22.326 11:41:48 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:22.326 11:41:48 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:22.326 11:41:48 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:22.326 11:41:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.326 11:41:48 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:22.326 11:41:48 -- pm/common@44 -- $ pid=5478 00:03:22.326 11:41:48 -- pm/common@50 -- $ kill -TERM 5478 00:03:22.326 11:41:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.326 11:41:48 -- 
pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:22.326 11:41:48 -- pm/common@44 -- $ pid=5480 00:03:22.326 11:41:48 -- pm/common@50 -- $ kill -TERM 5480 00:03:22.326 11:41:48 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:22.326 11:41:48 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:22.326 11:41:48 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:22.326 11:41:48 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:22.326 11:41:48 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:22.326 11:41:48 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:22.326 11:41:48 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:22.326 11:41:48 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:22.326 11:41:48 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:22.326 11:41:48 -- scripts/common.sh@336 -- # IFS=.-: 00:03:22.326 11:41:48 -- scripts/common.sh@336 -- # read -ra ver1 00:03:22.326 11:41:48 -- scripts/common.sh@337 -- # IFS=.-: 00:03:22.326 11:41:48 -- scripts/common.sh@337 -- # read -ra ver2 00:03:22.326 11:41:48 -- scripts/common.sh@338 -- # local 'op=<' 00:03:22.326 11:41:48 -- scripts/common.sh@340 -- # ver1_l=2 00:03:22.326 11:41:48 -- scripts/common.sh@341 -- # ver2_l=1 00:03:22.326 11:41:48 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:22.326 11:41:48 -- scripts/common.sh@344 -- # case "$op" in 00:03:22.326 11:41:48 -- scripts/common.sh@345 -- # : 1 00:03:22.326 11:41:48 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:22.326 11:41:48 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:22.326 11:41:48 -- scripts/common.sh@365 -- # decimal 1 00:03:22.326 11:41:48 -- scripts/common.sh@353 -- # local d=1 00:03:22.326 11:41:48 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:22.326 11:41:48 -- scripts/common.sh@355 -- # echo 1 00:03:22.326 11:41:48 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:22.326 11:41:48 -- scripts/common.sh@366 -- # decimal 2 00:03:22.326 11:41:48 -- scripts/common.sh@353 -- # local d=2 00:03:22.326 11:41:48 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:22.327 11:41:48 -- scripts/common.sh@355 -- # echo 2 00:03:22.327 11:41:48 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:22.327 11:41:48 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:22.327 11:41:48 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:22.327 11:41:48 -- scripts/common.sh@368 -- # return 0 00:03:22.327 11:41:48 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:22.327 11:41:48 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:22.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.327 --rc genhtml_branch_coverage=1 00:03:22.327 --rc genhtml_function_coverage=1 00:03:22.327 --rc genhtml_legend=1 00:03:22.327 --rc geninfo_all_blocks=1 00:03:22.327 --rc geninfo_unexecuted_blocks=1 00:03:22.327 00:03:22.327 ' 00:03:22.327 11:41:48 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:22.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.327 --rc genhtml_branch_coverage=1 00:03:22.327 --rc genhtml_function_coverage=1 00:03:22.327 --rc genhtml_legend=1 00:03:22.327 --rc geninfo_all_blocks=1 00:03:22.327 --rc geninfo_unexecuted_blocks=1 00:03:22.327 00:03:22.327 ' 00:03:22.327 11:41:48 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:22.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.327 --rc genhtml_branch_coverage=1 00:03:22.327 --rc 
genhtml_function_coverage=1 00:03:22.327 --rc genhtml_legend=1 00:03:22.327 --rc geninfo_all_blocks=1 00:03:22.327 --rc geninfo_unexecuted_blocks=1 00:03:22.327 00:03:22.327 ' 00:03:22.327 11:41:48 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:22.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.327 --rc genhtml_branch_coverage=1 00:03:22.327 --rc genhtml_function_coverage=1 00:03:22.327 --rc genhtml_legend=1 00:03:22.327 --rc geninfo_all_blocks=1 00:03:22.327 --rc geninfo_unexecuted_blocks=1 00:03:22.327 00:03:22.327 ' 00:03:22.327 11:41:48 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:22.327 11:41:48 -- nvmf/common.sh@7 -- # uname -s 00:03:22.327 11:41:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:22.327 11:41:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:22.327 11:41:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:22.327 11:41:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:22.327 11:41:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:22.327 11:41:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:22.327 11:41:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:22.327 11:41:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:22.327 11:41:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:22.327 11:41:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:22.327 11:41:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:205432e6-0d85-4ef4-92fc-cf1aa4632adc 00:03:22.327 11:41:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=205432e6-0d85-4ef4-92fc-cf1aa4632adc 00:03:22.327 11:41:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:22.327 11:41:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:22.327 11:41:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:22.327 11:41:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:22.327 11:41:48 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:22.327 11:41:48 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:22.327 11:41:48 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:22.327 11:41:48 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:22.327 11:41:48 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:22.327 11:41:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.327 11:41:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.327 11:41:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.327 11:41:48 -- paths/export.sh@5 -- # export PATH 00:03:22.327 11:41:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.327 11:41:48 -- nvmf/common.sh@51 -- # : 0 00:03:22.327 11:41:48 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:22.327 11:41:48 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:22.327 11:41:48 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:22.327 11:41:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:22.327 11:41:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:22.327 11:41:48 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:22.327 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:22.327 11:41:48 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:22.327 11:41:48 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:22.327 11:41:48 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:22.327 11:41:48 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:22.327 11:41:48 -- spdk/autotest.sh@32 -- # uname -s 00:03:22.327 11:41:48 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:22.327 11:41:48 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:22.327 11:41:48 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:22.327 11:41:48 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:22.327 11:41:48 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:22.327 11:41:48 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:22.327 11:41:48 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:22.327 11:41:48 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:22.327 11:41:48 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:22.327 11:41:48 -- spdk/autotest.sh@48 -- # udevadm_pid=54462 00:03:22.327 11:41:48 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:22.327 11:41:48 -- pm/common@17 -- # local monitor 00:03:22.327 11:41:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.327 11:41:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.327 11:41:48 -- pm/common@21 -- # date +%s 00:03:22.586 11:41:48 -- pm/common@25 -- # sleep 1 00:03:22.586 11:41:48 -- 
pm/common@21 -- # date +%s 00:03:22.586 11:41:48 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732707708 00:03:22.586 11:41:48 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732707708 00:03:22.586 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732707708_collect-cpu-load.pm.log 00:03:22.586 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732707708_collect-vmstat.pm.log 00:03:23.525 11:41:49 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:23.525 11:41:49 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:23.525 11:41:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:23.525 11:41:49 -- common/autotest_common.sh@10 -- # set +x 00:03:23.525 11:41:49 -- spdk/autotest.sh@59 -- # create_test_list 00:03:23.525 11:41:49 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:23.525 11:41:49 -- common/autotest_common.sh@10 -- # set +x 00:03:23.525 11:41:49 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:23.525 11:41:49 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:23.525 11:41:49 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:23.525 11:41:49 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:23.525 11:41:49 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:23.525 11:41:49 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:23.525 11:41:49 -- common/autotest_common.sh@1457 -- # uname 00:03:23.525 11:41:49 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:23.525 11:41:49 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:23.525 11:41:49 -- common/autotest_common.sh@1477 -- 
# uname 00:03:23.525 11:41:49 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:23.525 11:41:49 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:23.525 11:41:49 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:23.525 lcov: LCOV version 1.15 00:03:23.525 11:41:49 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:38.418 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:38.418 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:53.367 11:42:18 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:53.367 11:42:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:53.367 11:42:18 -- common/autotest_common.sh@10 -- # set +x 00:03:53.367 11:42:18 -- spdk/autotest.sh@78 -- # rm -f 00:03:53.367 11:42:18 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:53.367 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:53.628 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:53.628 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:53.628 11:42:19 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:53.628 11:42:19 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:53.628 11:42:19 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:53.628 11:42:19 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:53.628 
11:42:19 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:53.628 11:42:19 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:53.628 11:42:19 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:53.628 11:42:19 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:03:53.628 11:42:19 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:53.628 11:42:19 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:53.628 11:42:19 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:53.628 11:42:19 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:53.628 11:42:19 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:53.628 11:42:19 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:53.628 11:42:19 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:03:53.628 11:42:19 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:53.628 11:42:19 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:03:53.628 11:42:19 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:53.628 11:42:19 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:53.628 11:42:19 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:53.628 11:42:19 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:53.628 11:42:19 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:03:53.628 11:42:19 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:53.628 11:42:19 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:53.628 11:42:19 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:53.628 11:42:19 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:53.628 11:42:19 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:03:53.628 11:42:19 -- 
common/autotest_common.sh@1650 -- # local device=nvme1n3 00:03:53.628 11:42:19 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:53.628 11:42:19 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:53.628 11:42:19 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:53.628 11:42:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:53.628 11:42:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:53.628 11:42:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:53.628 11:42:19 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:53.628 11:42:19 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:53.628 No valid GPT data, bailing 00:03:53.628 11:42:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:53.628 11:42:19 -- scripts/common.sh@394 -- # pt= 00:03:53.628 11:42:19 -- scripts/common.sh@395 -- # return 1 00:03:53.628 11:42:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:53.628 1+0 records in 00:03:53.628 1+0 records out 00:03:53.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00674411 s, 155 MB/s 00:03:53.628 11:42:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:53.628 11:42:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:53.628 11:42:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:53.628 11:42:19 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:53.628 11:42:19 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:53.887 No valid GPT data, bailing 00:03:53.887 11:42:20 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:53.887 11:42:20 -- scripts/common.sh@394 -- # pt= 00:03:53.887 11:42:20 -- scripts/common.sh@395 -- # return 1 00:03:53.887 11:42:20 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:53.887 1+0 records in 00:03:53.887 1+0 records 
out 00:03:53.887 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00626138 s, 167 MB/s 00:03:53.887 11:42:20 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:53.887 11:42:20 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:53.887 11:42:20 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:53.887 11:42:20 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:53.887 11:42:20 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:53.887 No valid GPT data, bailing 00:03:53.887 11:42:20 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:53.887 11:42:20 -- scripts/common.sh@394 -- # pt= 00:03:53.887 11:42:20 -- scripts/common.sh@395 -- # return 1 00:03:53.887 11:42:20 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:53.887 1+0 records in 00:03:53.887 1+0 records out 00:03:53.887 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00631624 s, 166 MB/s 00:03:53.887 11:42:20 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:53.887 11:42:20 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:53.887 11:42:20 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:53.887 11:42:20 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:53.887 11:42:20 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:53.887 No valid GPT data, bailing 00:03:53.887 11:42:20 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:53.887 11:42:20 -- scripts/common.sh@394 -- # pt= 00:03:53.887 11:42:20 -- scripts/common.sh@395 -- # return 1 00:03:53.887 11:42:20 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:53.887 1+0 records in 00:03:53.887 1+0 records out 00:03:53.887 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00638389 s, 164 MB/s 00:03:53.888 11:42:20 -- spdk/autotest.sh@105 -- # sync 00:03:54.147 11:42:20 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd 
reap_spdk_processes 00:03:54.147 11:42:20 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:54.147 11:42:20 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:56.689 11:42:22 -- spdk/autotest.sh@111 -- # uname -s 00:03:56.689 11:42:23 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:56.689 11:42:23 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:56.689 11:42:23 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:57.631 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:57.631 Hugepages 00:03:57.631 node hugesize free / total 00:03:57.631 node0 1048576kB 0 / 0 00:03:57.631 node0 2048kB 0 / 0 00:03:57.631 00:03:57.631 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:57.631 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:57.892 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:57.892 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:57.892 11:42:24 -- spdk/autotest.sh@117 -- # uname -s 00:03:57.892 11:42:24 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:57.892 11:42:24 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:57.892 11:42:24 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:58.833 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:58.833 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:58.833 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:58.833 11:42:25 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:59.774 11:42:26 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:59.774 11:42:26 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:59.774 11:42:26 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:59.774 11:42:26 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 
00:03:59.774 11:42:26 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:59.774 11:42:26 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:59.774 11:42:26 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:59.774 11:42:26 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:59.774 11:42:26 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:00.034 11:42:26 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:00.034 11:42:26 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:00.034 11:42:26 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:00.605 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.605 Waiting for block devices as requested 00:04:00.605 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:00.605 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:00.605 11:42:26 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:00.605 11:42:26 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:00.605 11:42:26 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:00.605 11:42:26 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:00.605 11:42:26 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:00.605 11:42:26 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:00.605 11:42:26 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:00.605 11:42:26 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:00.605 11:42:26 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:00.605 
11:42:26 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:00.605 11:42:26 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:00.605 11:42:26 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:00.605 11:42:26 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:00.865 11:42:26 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:00.865 11:42:26 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:00.865 11:42:26 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:00.865 11:42:26 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:00.865 11:42:26 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:00.865 11:42:26 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:00.865 11:42:27 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:00.865 11:42:27 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:00.865 11:42:27 -- common/autotest_common.sh@1543 -- # continue 00:04:00.865 11:42:27 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:00.865 11:42:27 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:00.865 11:42:27 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:00.865 11:42:27 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:00.865 11:42:27 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:00.865 11:42:27 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:00.865 11:42:27 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:00.865 11:42:27 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:00.865 11:42:27 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:00.865 11:42:27 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:00.865 11:42:27 -- 
common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:00.865 11:42:27 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:00.865 11:42:27 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:00.865 11:42:27 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:00.865 11:42:27 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:00.865 11:42:27 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:00.865 11:42:27 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:00.865 11:42:27 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:00.865 11:42:27 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:00.865 11:42:27 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:00.865 11:42:27 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:00.865 11:42:27 -- common/autotest_common.sh@1543 -- # continue 00:04:00.865 11:42:27 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:00.865 11:42:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:00.865 11:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:00.865 11:42:27 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:00.865 11:42:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.865 11:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:00.865 11:42:27 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:01.805 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.805 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.805 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.805 11:42:28 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:01.805 11:42:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:01.805 11:42:28 -- common/autotest_common.sh@10 -- # set +x 00:04:01.805 11:42:28 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:01.805 11:42:28 -- 
common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:01.805 11:42:28 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:01.805 11:42:28 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:01.805 11:42:28 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:01.805 11:42:28 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:01.805 11:42:28 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:01.805 11:42:28 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:01.805 11:42:28 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:01.805 11:42:28 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:01.805 11:42:28 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:01.805 11:42:28 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:01.805 11:42:28 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:02.064 11:42:28 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:02.064 11:42:28 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:02.064 11:42:28 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:02.064 11:42:28 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:02.064 11:42:28 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:02.064 11:42:28 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:02.064 11:42:28 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:02.064 11:42:28 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:02.064 11:42:28 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:02.064 11:42:28 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:02.064 11:42:28 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:02.064 11:42:28 -- 
common/autotest_common.sh@1572 -- # return 0 00:04:02.064 11:42:28 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:02.064 11:42:28 -- common/autotest_common.sh@1580 -- # return 0 00:04:02.064 11:42:28 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:02.064 11:42:28 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:02.064 11:42:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:02.064 11:42:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:02.064 11:42:28 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:02.064 11:42:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:02.064 11:42:28 -- common/autotest_common.sh@10 -- # set +x 00:04:02.064 11:42:28 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:02.064 11:42:28 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:02.064 11:42:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.064 11:42:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.064 11:42:28 -- common/autotest_common.sh@10 -- # set +x 00:04:02.064 ************************************ 00:04:02.064 START TEST env 00:04:02.064 ************************************ 00:04:02.064 11:42:28 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:02.064 * Looking for test storage... 
00:04:02.064 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:02.064 11:42:28 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:02.064 11:42:28 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:02.064 11:42:28 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:02.324 11:42:28 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:02.324 11:42:28 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.324 11:42:28 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.324 11:42:28 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.324 11:42:28 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.324 11:42:28 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.324 11:42:28 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.324 11:42:28 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.324 11:42:28 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.324 11:42:28 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.324 11:42:28 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.324 11:42:28 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.324 11:42:28 env -- scripts/common.sh@344 -- # case "$op" in 00:04:02.324 11:42:28 env -- scripts/common.sh@345 -- # : 1 00:04:02.324 11:42:28 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.324 11:42:28 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:02.324 11:42:28 env -- scripts/common.sh@365 -- # decimal 1 00:04:02.324 11:42:28 env -- scripts/common.sh@353 -- # local d=1 00:04:02.324 11:42:28 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.324 11:42:28 env -- scripts/common.sh@355 -- # echo 1 00:04:02.324 11:42:28 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.324 11:42:28 env -- scripts/common.sh@366 -- # decimal 2 00:04:02.324 11:42:28 env -- scripts/common.sh@353 -- # local d=2 00:04:02.324 11:42:28 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.324 11:42:28 env -- scripts/common.sh@355 -- # echo 2 00:04:02.324 11:42:28 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.324 11:42:28 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.324 11:42:28 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.324 11:42:28 env -- scripts/common.sh@368 -- # return 0 00:04:02.324 11:42:28 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.324 11:42:28 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:02.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.324 --rc genhtml_branch_coverage=1 00:04:02.324 --rc genhtml_function_coverage=1 00:04:02.324 --rc genhtml_legend=1 00:04:02.324 --rc geninfo_all_blocks=1 00:04:02.324 --rc geninfo_unexecuted_blocks=1 00:04:02.324 00:04:02.324 ' 00:04:02.324 11:42:28 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:02.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.324 --rc genhtml_branch_coverage=1 00:04:02.324 --rc genhtml_function_coverage=1 00:04:02.324 --rc genhtml_legend=1 00:04:02.324 --rc geninfo_all_blocks=1 00:04:02.324 --rc geninfo_unexecuted_blocks=1 00:04:02.324 00:04:02.324 ' 00:04:02.324 11:42:28 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:02.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:02.324 --rc genhtml_branch_coverage=1 00:04:02.324 --rc genhtml_function_coverage=1 00:04:02.324 --rc genhtml_legend=1 00:04:02.324 --rc geninfo_all_blocks=1 00:04:02.324 --rc geninfo_unexecuted_blocks=1 00:04:02.324 00:04:02.324 ' 00:04:02.324 11:42:28 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:02.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.324 --rc genhtml_branch_coverage=1 00:04:02.324 --rc genhtml_function_coverage=1 00:04:02.324 --rc genhtml_legend=1 00:04:02.324 --rc geninfo_all_blocks=1 00:04:02.324 --rc geninfo_unexecuted_blocks=1 00:04:02.324 00:04:02.324 ' 00:04:02.324 11:42:28 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:02.324 11:42:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.324 11:42:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.324 11:42:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.324 ************************************ 00:04:02.324 START TEST env_memory 00:04:02.324 ************************************ 00:04:02.324 11:42:28 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:02.324 00:04:02.324 00:04:02.324 CUnit - A unit testing framework for C - Version 2.1-3 00:04:02.324 http://cunit.sourceforge.net/ 00:04:02.324 00:04:02.324 00:04:02.324 Suite: memory 00:04:02.324 Test: alloc and free memory map ...[2024-11-27 11:42:28.582045] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:02.324 passed 00:04:02.324 Test: mem map translation ...[2024-11-27 11:42:28.627763] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:02.324 [2024-11-27 11:42:28.627845] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:02.324 [2024-11-27 11:42:28.627916] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:02.324 [2024-11-27 11:42:28.627940] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:02.324 passed 00:04:02.324 Test: mem map registration ...[2024-11-27 11:42:28.695228] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:02.324 [2024-11-27 11:42:28.695286] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:02.585 passed 00:04:02.585 Test: mem map adjacent registrations ...passed 00:04:02.585 00:04:02.585 Run Summary: Type Total Ran Passed Failed Inactive 00:04:02.585 suites 1 1 n/a 0 0 00:04:02.585 tests 4 4 4 0 0 00:04:02.585 asserts 152 152 152 0 n/a 00:04:02.585 00:04:02.585 Elapsed time = 0.245 seconds 00:04:02.585 00:04:02.585 real 0m0.298s 00:04:02.585 user 0m0.257s 00:04:02.585 sys 0m0.028s 00:04:02.585 11:42:28 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.585 11:42:28 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:02.585 ************************************ 00:04:02.585 END TEST env_memory 00:04:02.585 ************************************ 00:04:02.585 11:42:28 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:02.585 11:42:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.585 11:42:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.585 11:42:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.585 
************************************ 00:04:02.585 START TEST env_vtophys 00:04:02.585 ************************************ 00:04:02.585 11:42:28 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:02.585 EAL: lib.eal log level changed from notice to debug 00:04:02.585 EAL: Detected lcore 0 as core 0 on socket 0 00:04:02.585 EAL: Detected lcore 1 as core 0 on socket 0 00:04:02.585 EAL: Detected lcore 2 as core 0 on socket 0 00:04:02.585 EAL: Detected lcore 3 as core 0 on socket 0 00:04:02.585 EAL: Detected lcore 4 as core 0 on socket 0 00:04:02.585 EAL: Detected lcore 5 as core 0 on socket 0 00:04:02.585 EAL: Detected lcore 6 as core 0 on socket 0 00:04:02.585 EAL: Detected lcore 7 as core 0 on socket 0 00:04:02.585 EAL: Detected lcore 8 as core 0 on socket 0 00:04:02.585 EAL: Detected lcore 9 as core 0 on socket 0 00:04:02.585 EAL: Maximum logical cores by configuration: 128 00:04:02.585 EAL: Detected CPU lcores: 10 00:04:02.585 EAL: Detected NUMA nodes: 1 00:04:02.585 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:02.585 EAL: Detected shared linkage of DPDK 00:04:02.585 EAL: No shared files mode enabled, IPC will be disabled 00:04:02.585 EAL: Selected IOVA mode 'PA' 00:04:02.585 EAL: Probing VFIO support... 00:04:02.585 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:02.585 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:02.585 EAL: Ask a virtual area of 0x2e000 bytes 00:04:02.585 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:02.585 EAL: Setting up physically contiguous memory... 
00:04:02.585 EAL: Setting maximum number of open files to 524288 00:04:02.585 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:02.585 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:02.585 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.585 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:02.585 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.585 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.585 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:02.585 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:02.585 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.585 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:02.585 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.585 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.585 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:02.585 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:02.585 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.585 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:02.585 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.585 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.585 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:02.585 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:02.585 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.585 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:02.585 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.585 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.585 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:02.585 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:02.585 EAL: Hugepages will be freed exactly as allocated. 
00:04:02.585 EAL: No shared files mode enabled, IPC is disabled 00:04:02.585 EAL: No shared files mode enabled, IPC is disabled 00:04:02.845 EAL: TSC frequency is ~2290000 KHz 00:04:02.845 EAL: Main lcore 0 is ready (tid=7feac66d3a40;cpuset=[0]) 00:04:02.845 EAL: Trying to obtain current memory policy. 00:04:02.845 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.845 EAL: Restoring previous memory policy: 0 00:04:02.845 EAL: request: mp_malloc_sync 00:04:02.845 EAL: No shared files mode enabled, IPC is disabled 00:04:02.845 EAL: Heap on socket 0 was expanded by 2MB 00:04:02.845 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:02.845 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:02.845 EAL: Mem event callback 'spdk:(nil)' registered 00:04:02.845 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:02.845 00:04:02.845 00:04:02.845 CUnit - A unit testing framework for C - Version 2.1-3 00:04:02.845 http://cunit.sourceforge.net/ 00:04:02.845 00:04:02.845 00:04:02.845 Suite: components_suite 00:04:03.107 Test: vtophys_malloc_test ...passed 00:04:03.107 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:03.107 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.107 EAL: Restoring previous memory policy: 4 00:04:03.107 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.107 EAL: request: mp_malloc_sync 00:04:03.107 EAL: No shared files mode enabled, IPC is disabled 00:04:03.107 EAL: Heap on socket 0 was expanded by 4MB 00:04:03.107 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.107 EAL: request: mp_malloc_sync 00:04:03.107 EAL: No shared files mode enabled, IPC is disabled 00:04:03.107 EAL: Heap on socket 0 was shrunk by 4MB 00:04:03.107 EAL: Trying to obtain current memory policy. 
00:04:03.107 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.107 EAL: Restoring previous memory policy: 4 00:04:03.107 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.107 EAL: request: mp_malloc_sync 00:04:03.107 EAL: No shared files mode enabled, IPC is disabled 00:04:03.107 EAL: Heap on socket 0 was expanded by 6MB 00:04:03.107 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.107 EAL: request: mp_malloc_sync 00:04:03.107 EAL: No shared files mode enabled, IPC is disabled 00:04:03.107 EAL: Heap on socket 0 was shrunk by 6MB 00:04:03.107 EAL: Trying to obtain current memory policy. 00:04:03.107 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.107 EAL: Restoring previous memory policy: 4 00:04:03.107 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.107 EAL: request: mp_malloc_sync 00:04:03.107 EAL: No shared files mode enabled, IPC is disabled 00:04:03.107 EAL: Heap on socket 0 was expanded by 10MB 00:04:03.366 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.366 EAL: request: mp_malloc_sync 00:04:03.366 EAL: No shared files mode enabled, IPC is disabled 00:04:03.366 EAL: Heap on socket 0 was shrunk by 10MB 00:04:03.366 EAL: Trying to obtain current memory policy. 00:04:03.366 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.366 EAL: Restoring previous memory policy: 4 00:04:03.366 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.366 EAL: request: mp_malloc_sync 00:04:03.366 EAL: No shared files mode enabled, IPC is disabled 00:04:03.366 EAL: Heap on socket 0 was expanded by 18MB 00:04:03.366 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.366 EAL: request: mp_malloc_sync 00:04:03.366 EAL: No shared files mode enabled, IPC is disabled 00:04:03.366 EAL: Heap on socket 0 was shrunk by 18MB 00:04:03.366 EAL: Trying to obtain current memory policy. 
00:04:03.366 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.366 EAL: Restoring previous memory policy: 4 00:04:03.366 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.366 EAL: request: mp_malloc_sync 00:04:03.366 EAL: No shared files mode enabled, IPC is disabled 00:04:03.366 EAL: Heap on socket 0 was expanded by 34MB 00:04:03.366 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.366 EAL: request: mp_malloc_sync 00:04:03.366 EAL: No shared files mode enabled, IPC is disabled 00:04:03.366 EAL: Heap on socket 0 was shrunk by 34MB 00:04:03.366 EAL: Trying to obtain current memory policy. 00:04:03.366 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.366 EAL: Restoring previous memory policy: 4 00:04:03.366 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.366 EAL: request: mp_malloc_sync 00:04:03.366 EAL: No shared files mode enabled, IPC is disabled 00:04:03.366 EAL: Heap on socket 0 was expanded by 66MB 00:04:03.626 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.626 EAL: request: mp_malloc_sync 00:04:03.626 EAL: No shared files mode enabled, IPC is disabled 00:04:03.626 EAL: Heap on socket 0 was shrunk by 66MB 00:04:03.626 EAL: Trying to obtain current memory policy. 00:04:03.626 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.626 EAL: Restoring previous memory policy: 4 00:04:03.626 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.626 EAL: request: mp_malloc_sync 00:04:03.626 EAL: No shared files mode enabled, IPC is disabled 00:04:03.626 EAL: Heap on socket 0 was expanded by 130MB 00:04:03.886 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.886 EAL: request: mp_malloc_sync 00:04:03.886 EAL: No shared files mode enabled, IPC is disabled 00:04:03.886 EAL: Heap on socket 0 was shrunk by 130MB 00:04:04.146 EAL: Trying to obtain current memory policy. 
00:04:04.146 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.146 EAL: Restoring previous memory policy: 4 00:04:04.146 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.146 EAL: request: mp_malloc_sync 00:04:04.146 EAL: No shared files mode enabled, IPC is disabled 00:04:04.146 EAL: Heap on socket 0 was expanded by 258MB 00:04:04.717 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.717 EAL: request: mp_malloc_sync 00:04:04.717 EAL: No shared files mode enabled, IPC is disabled 00:04:04.717 EAL: Heap on socket 0 was shrunk by 258MB 00:04:05.287 EAL: Trying to obtain current memory policy. 00:04:05.287 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.287 EAL: Restoring previous memory policy: 4 00:04:05.287 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.287 EAL: request: mp_malloc_sync 00:04:05.287 EAL: No shared files mode enabled, IPC is disabled 00:04:05.287 EAL: Heap on socket 0 was expanded by 514MB 00:04:06.228 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.228 EAL: request: mp_malloc_sync 00:04:06.228 EAL: No shared files mode enabled, IPC is disabled 00:04:06.228 EAL: Heap on socket 0 was shrunk by 514MB 00:04:07.170 EAL: Trying to obtain current memory policy. 
00:04:07.170 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:07.170 EAL: Restoring previous memory policy: 4
00:04:07.170 EAL: Calling mem event callback 'spdk:(nil)'
00:04:07.170 EAL: request: mp_malloc_sync
00:04:07.170 EAL: No shared files mode enabled, IPC is disabled
00:04:07.170 EAL: Heap on socket 0 was expanded by 1026MB
00:04:09.080 EAL: Calling mem event callback 'spdk:(nil)'
00:04:09.080 EAL: request: mp_malloc_sync
00:04:09.080 EAL: No shared files mode enabled, IPC is disabled
00:04:09.080 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:10.990 passed
00:04:10.990
00:04:10.990 Run Summary: Type Total Ran Passed Failed Inactive
00:04:10.990 suites 1 1 n/a 0 0
00:04:10.990 tests 2 2 2 0 0
00:04:10.990 asserts 5747 5747 5747 0 n/a
00:04:10.990
00:04:10.990 Elapsed time = 7.938 seconds
00:04:10.990 EAL: Calling mem event callback 'spdk:(nil)'
00:04:10.990 EAL: request: mp_malloc_sync
00:04:10.990 EAL: No shared files mode enabled, IPC is disabled
00:04:10.990 EAL: Heap on socket 0 was shrunk by 2MB
00:04:10.990 EAL: No shared files mode enabled, IPC is disabled
00:04:10.990 EAL: No shared files mode enabled, IPC is disabled
00:04:10.990 EAL: No shared files mode enabled, IPC is disabled
00:04:10.990
00:04:10.990 real 0m8.251s
00:04:10.990 user 0m7.293s
00:04:10.990 sys 0m0.806s
00:04:10.990 11:42:37 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:10.990 11:42:37 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:10.990 ************************************
00:04:10.990 END TEST env_vtophys
00:04:10.990 ************************************
00:04:10.990 11:42:37 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:10.990 11:42:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:10.990 11:42:37 env
00:04:10.990 ************************************
00:04:10.990 START TEST env_pci
00:04:10.990 ************************************
00:04:10.990 11:42:37 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:10.990
00:04:10.990
00:04:10.990 CUnit - A unit testing framework for C - Version 2.1-3
00:04:10.990 http://cunit.sourceforge.net/
00:04:10.990
00:04:10.990
00:04:10.990 Suite: pci
00:04:10.990 Test: pci_hook ...[2024-11-27 11:42:37.214940] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56778 has claimed it
00:04:10.990 passed
00:04:10.990
00:04:10.990 Run Summary: Type Total Ran Passed Failed Inactive
00:04:10.990 suites 1 1 n/a 0 0
00:04:10.990 tests 1 1 1 0 0
00:04:10.990 asserts 25 25 25 0 n/a
00:04:10.990
00:04:10.990 Elapsed time = 0.004 seconds
00:04:10.990 EAL: Cannot find device (10000:00:01.0)
00:04:10.990 EAL: Failed to attach device on primary process
00:04:10.990
00:04:10.990 real 0m0.096s
00:04:10.990 user 0m0.044s
00:04:10.990 sys 0m0.051s
00:04:10.990 11:42:37 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:10.990 11:42:37 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:10.990 ************************************
00:04:10.990 END TEST env_pci
00:04:10.990 ************************************
00:04:10.990 11:42:37 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:10.990 11:42:37 env -- env/env.sh@15 -- # uname
00:04:10.990 11:42:37 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:10.990 11:42:37 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:10.990 11:42:37 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:10.990 11:42:37 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:04:10.990 11:42:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:10.990 11:42:37 env -- common/autotest_common.sh@10 -- # set +x
00:04:10.990 ************************************
00:04:10.990 START TEST env_dpdk_post_init
00:04:10.990 ************************************
00:04:10.990 11:42:37 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:11.249 EAL: Detected CPU lcores: 10
00:04:11.249 EAL: Detected NUMA nodes: 1
00:04:11.249 EAL: Detected shared linkage of DPDK
00:04:11.249 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:11.249 EAL: Selected IOVA mode 'PA'
00:04:11.249 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:11.249 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:04:11.249 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:04:11.249 Starting DPDK initialization...
00:04:11.249 Starting SPDK post initialization...
00:04:11.249 SPDK NVMe probe
00:04:11.249 Attaching to 0000:00:10.0
00:04:11.249 Attaching to 0000:00:11.0
00:04:11.249 Attached to 0000:00:10.0
00:04:11.249 Attached to 0000:00:11.0
00:04:11.249 Cleaning up...
00:04:11.249
00:04:11.249 real 0m0.280s
00:04:11.249 user 0m0.092s
00:04:11.249 sys 0m0.089s
00:04:11.249 11:42:37 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:11.249 11:42:37 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:11.249 ************************************
00:04:11.249 END TEST env_dpdk_post_init
00:04:11.249 ************************************
00:04:11.509 11:42:37 env -- env/env.sh@26 -- # uname
00:04:11.509 11:42:37 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:11.509 11:42:37 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:04:11.509 11:42:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:11.509 11:42:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:11.509 11:42:37 env -- common/autotest_common.sh@10 -- # set +x
00:04:11.509 ************************************
00:04:11.509 START TEST env_mem_callbacks
00:04:11.509 ************************************
00:04:11.509 11:42:37 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:04:11.509 EAL: Detected CPU lcores: 10
00:04:11.509 EAL: Detected NUMA nodes: 1
00:04:11.509 EAL: Detected shared linkage of DPDK
00:04:11.509 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:11.509 EAL: Selected IOVA mode 'PA'
00:04:11.509 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:11.509
00:04:11.509
00:04:11.509 CUnit - A unit testing framework for C - Version 2.1-3
00:04:11.509 http://cunit.sourceforge.net/
00:04:11.509
00:04:11.509
00:04:11.509 Suite: memory
00:04:11.509 Test: test ...
00:04:11.509 register 0x200000200000 2097152
00:04:11.509 malloc 3145728
00:04:11.509 register 0x200000400000 4194304
00:04:11.509 buf 0x2000004fffc0 len 3145728 PASSED
00:04:11.509 malloc 64
00:04:11.509 buf 0x2000004ffec0 len 64 PASSED
00:04:11.509 malloc 4194304
00:04:11.509 register 0x200000800000 6291456
00:04:11.509 buf 0x2000009fffc0 len 4194304 PASSED
00:04:11.509 free 0x2000004fffc0 3145728
00:04:11.509 free 0x2000004ffec0 64
00:04:11.509 unregister 0x200000400000 4194304 PASSED
00:04:11.509 free 0x2000009fffc0 4194304
00:04:11.778 unregister 0x200000800000 6291456 PASSED
00:04:11.778 malloc 8388608
00:04:11.778 register 0x200000400000 10485760
00:04:11.778 buf 0x2000005fffc0 len 8388608 PASSED
00:04:11.778 free 0x2000005fffc0 8388608
00:04:11.778 unregister 0x200000400000 10485760 PASSED
00:04:11.778 passed
00:04:11.778
00:04:11.778 Run Summary: Type Total Ran Passed Failed Inactive
00:04:11.778 suites 1 1 n/a 0 0
00:04:11.778 tests 1 1 1 0 0
00:04:11.778 asserts 15 15 15 0 n/a
00:04:11.778
00:04:11.778 Elapsed time = 0.079 seconds
00:04:11.778
00:04:11.778 real 0m0.270s
00:04:11.778 user 0m0.108s
00:04:11.778 sys 0m0.060s
00:04:11.778 11:42:37 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:11.778 11:42:37 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:11.778 ************************************
00:04:11.778 END TEST env_mem_callbacks
00:04:11.778 ************************************
00:04:11.778
00:04:11.778 real 0m9.727s
00:04:11.778 user 0m8.014s
00:04:11.778 sys 0m1.370s
00:04:11.778 11:42:38 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:11.778 11:42:38 env -- common/autotest_common.sh@10 -- # set +x
00:04:11.778 ************************************
00:04:11.778 END TEST env
00:04:11.778 ************************************
00:04:11.778 11:42:38 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:04:11.778 11:42:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:11.778 11:42:38 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:11.778 11:42:38 -- common/autotest_common.sh@10 -- # set +x
00:04:11.778 ************************************
00:04:11.778 START TEST rpc
00:04:11.778 ************************************
00:04:11.778 11:42:38 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:04:12.049 * Looking for test storage...
00:04:12.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:04:12.049 11:42:38 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:12.049 11:42:38 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:04:12.049 11:42:38 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:12.049 11:42:38 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:12.049 11:42:38 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:12.049 11:42:38 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:12.049 11:42:38 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:12.049 11:42:38 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:12.049 11:42:38 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:12.049 11:42:38 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:12.049 11:42:38 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:12.049 11:42:38 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:12.049 11:42:38 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:12.049 11:42:38 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:12.049 11:42:38 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:12.049 11:42:38 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:12.049 11:42:38 rpc -- scripts/common.sh@345 -- # : 1
00:04:12.049 11:42:38 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:12.049 11:42:38 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:12.049 11:42:38 rpc -- scripts/common.sh@365 -- # decimal 1
00:04:12.049 11:42:38 rpc -- scripts/common.sh@353 -- # local d=1
00:04:12.049 11:42:38 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:12.049 11:42:38 rpc -- scripts/common.sh@355 -- # echo 1
00:04:12.049 11:42:38 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:12.049 11:42:38 rpc -- scripts/common.sh@366 -- # decimal 2
00:04:12.049 11:42:38 rpc -- scripts/common.sh@353 -- # local d=2
00:04:12.049 11:42:38 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:12.049 11:42:38 rpc -- scripts/common.sh@355 -- # echo 2
00:04:12.049 11:42:38 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:12.049 11:42:38 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:12.049 11:42:38 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:12.049 11:42:38 rpc -- scripts/common.sh@368 -- # return 0
00:04:12.049 11:42:38 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:12.049 11:42:38 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:12.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:12.049 --rc genhtml_branch_coverage=1
00:04:12.049 --rc genhtml_function_coverage=1
00:04:12.049 --rc genhtml_legend=1
00:04:12.049 --rc geninfo_all_blocks=1
00:04:12.049 --rc geninfo_unexecuted_blocks=1
00:04:12.049
00:04:12.049 '
00:04:12.049 11:42:38 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:12.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:12.049 --rc genhtml_branch_coverage=1
00:04:12.049 --rc genhtml_function_coverage=1
00:04:12.049 --rc genhtml_legend=1
00:04:12.049 --rc geninfo_all_blocks=1
00:04:12.049 --rc geninfo_unexecuted_blocks=1
00:04:12.049
00:04:12.049 '
00:04:12.049 11:42:38 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:12.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:12.049 --rc genhtml_branch_coverage=1
00:04:12.049 --rc genhtml_function_coverage=1
00:04:12.049 --rc genhtml_legend=1
00:04:12.049 --rc geninfo_all_blocks=1
00:04:12.049 --rc geninfo_unexecuted_blocks=1
00:04:12.049
00:04:12.049 '
00:04:12.049 11:42:38 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:12.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:12.049 --rc genhtml_branch_coverage=1
00:04:12.049 --rc genhtml_function_coverage=1
00:04:12.049 --rc genhtml_legend=1
00:04:12.049 --rc geninfo_all_blocks=1
00:04:12.049 --rc geninfo_unexecuted_blocks=1
00:04:12.049
00:04:12.049 '
00:04:12.049 11:42:38 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56905
00:04:12.049 11:42:38 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:04:12.049 11:42:38 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:12.049 11:42:38 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56905
00:04:12.049 11:42:38 rpc -- common/autotest_common.sh@835 -- # '[' -z 56905 ']'
00:04:12.049 11:42:38 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:12.049 11:42:38 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:12.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:12.049 11:42:38 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:12.049 11:42:38 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:12.049 11:42:38 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:12.049 [2024-11-27 11:42:38.386334] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization...
00:04:12.049 [2024-11-27 11:42:38.386809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56905 ]
00:04:12.309 [2024-11-27 11:42:38.560092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:12.309 [2024-11-27 11:42:38.666201] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:12.309 [2024-11-27 11:42:38.666257] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56905' to capture a snapshot of events at runtime.
00:04:12.309 [2024-11-27 11:42:38.666267] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:12.309 [2024-11-27 11:42:38.666277] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:12.309 [2024-11-27 11:42:38.666283] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56905 for offline analysis/debug.
00:04:12.309 [2024-11-27 11:42:38.667439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:13.248 11:42:39 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:13.248 11:42:39 rpc -- common/autotest_common.sh@868 -- # return 0
00:04:13.248 11:42:39 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:04:13.248 11:42:39 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:04:13.248 11:42:39 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:13.248 11:42:39 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:13.248 11:42:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:13.248 11:42:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:13.248 11:42:39 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:13.248 ************************************
00:04:13.248 START TEST rpc_integrity
00:04:13.248 ************************************
00:04:13.248 11:42:39 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:04:13.248 11:42:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:13.248 11:42:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:13.248 11:42:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:13.248 11:42:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:13.248 11:42:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:13.248 11:42:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:13.248 11:42:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:13.248 11:42:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:13.248 11:42:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:13.249 11:42:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:13.249 11:42:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:13.249 11:42:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:13.249 11:42:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:13.249 11:42:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:13.249 11:42:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:13.249 11:42:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:13.249 11:42:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:13.249 {
00:04:13.249 "name": "Malloc0",
00:04:13.249 "aliases": [
00:04:13.249 "1a7430ee-2196-46b3-980d-5bd1d40c857d"
00:04:13.249 ],
00:04:13.249 "product_name": "Malloc disk",
00:04:13.249 "block_size": 512,
00:04:13.249 "num_blocks": 16384,
00:04:13.249 "uuid": "1a7430ee-2196-46b3-980d-5bd1d40c857d",
00:04:13.249 "assigned_rate_limits": {
00:04:13.249 "rw_ios_per_sec": 0,
00:04:13.249 "rw_mbytes_per_sec": 0,
00:04:13.249 "r_mbytes_per_sec": 0,
00:04:13.249 "w_mbytes_per_sec": 0
00:04:13.249 },
00:04:13.249 "claimed": false,
00:04:13.249 "zoned": false,
00:04:13.249 "supported_io_types": {
00:04:13.249 "read": true,
00:04:13.249 "write": true,
00:04:13.249 "unmap": true,
00:04:13.249 "flush": true,
00:04:13.249 "reset": true,
00:04:13.249 "nvme_admin": false,
00:04:13.249 "nvme_io": false,
00:04:13.249 "nvme_io_md": false,
00:04:13.249 "write_zeroes": true,
00:04:13.249 "zcopy": true,
00:04:13.249 "get_zone_info": false,
00:04:13.249 "zone_management": false,
00:04:13.249 "zone_append": false,
00:04:13.249 "compare": false,
00:04:13.249 "compare_and_write": false,
00:04:13.249 "abort": true,
00:04:13.249 "seek_hole": false,
00:04:13.249 "seek_data": false,
00:04:13.249 "copy": true,
00:04:13.249 "nvme_iov_md": false
00:04:13.249 },
00:04:13.249 "memory_domains": [
00:04:13.249 {
00:04:13.249 "dma_device_id": "system",
00:04:13.249 "dma_device_type": 1
00:04:13.249 },
00:04:13.249 {
00:04:13.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:13.249 "dma_device_type": 2
00:04:13.249 }
00:04:13.249 ],
00:04:13.249 "driver_specific": {}
00:04:13.249 }
00:04:13.249 ]'
00:04:13.249 11:42:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:13.509 11:42:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:13.509 11:42:39 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:13.509 11:42:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:13.509 11:42:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:13.509 [2024-11-27 11:42:39.671950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:13.509 [2024-11-27 11:42:39.672001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:13.509 [2024-11-27 11:42:39.672026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:04:13.509 [2024-11-27 11:42:39.672040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:13.509 [2024-11-27 11:42:39.674477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:13.509 [2024-11-27 11:42:39.674514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:13.509 Passthru0
00:04:13.509 11:42:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:13.509 11:42:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:13.509 11:42:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:13.509 11:42:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:13.509 11:42:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:13.509 11:42:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:13.509 {
00:04:13.509 "name": "Malloc0",
00:04:13.509 "aliases": [
00:04:13.509 "1a7430ee-2196-46b3-980d-5bd1d40c857d"
00:04:13.509 ],
00:04:13.509 "product_name": "Malloc disk",
00:04:13.509 "block_size": 512,
00:04:13.509 "num_blocks": 16384,
00:04:13.509 "uuid": "1a7430ee-2196-46b3-980d-5bd1d40c857d",
00:04:13.509 "assigned_rate_limits": {
00:04:13.509 "rw_ios_per_sec": 0,
00:04:13.509 "rw_mbytes_per_sec": 0,
00:04:13.509 "r_mbytes_per_sec": 0,
00:04:13.509 "w_mbytes_per_sec": 0
00:04:13.509 },
00:04:13.509 "claimed": true,
00:04:13.509 "claim_type": "exclusive_write",
00:04:13.509 "zoned": false,
00:04:13.509 "supported_io_types": {
00:04:13.509 "read": true,
00:04:13.509 "write": true,
00:04:13.509 "unmap": true,
00:04:13.509 "flush": true,
00:04:13.509 "reset": true,
00:04:13.509 "nvme_admin": false,
00:04:13.509 "nvme_io": false,
00:04:13.509 "nvme_io_md": false,
00:04:13.509 "write_zeroes": true,
00:04:13.509 "zcopy": true,
00:04:13.509 "get_zone_info": false,
00:04:13.509 "zone_management": false,
00:04:13.509 "zone_append": false,
00:04:13.509 "compare": false,
00:04:13.509 "compare_and_write": false,
00:04:13.509 "abort": true,
00:04:13.509 "seek_hole": false,
00:04:13.509 "seek_data": false,
00:04:13.509 "copy": true,
00:04:13.509 "nvme_iov_md": false
00:04:13.509 },
00:04:13.509 "memory_domains": [
00:04:13.509 {
00:04:13.509 "dma_device_id": "system",
00:04:13.509 "dma_device_type": 1
00:04:13.509 },
00:04:13.509 {
00:04:13.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:13.509 "dma_device_type": 2
00:04:13.509 }
00:04:13.509 ],
00:04:13.509 "driver_specific": {}
00:04:13.509 },
00:04:13.509 {
00:04:13.509 "name": "Passthru0",
00:04:13.509 "aliases": [
00:04:13.509 "4eed8b85-9c8e-58d7-8cc3-122c6b255e59"
00:04:13.509 ],
00:04:13.509 "product_name": "passthru",
00:04:13.509 "block_size": 512,
00:04:13.509 "num_blocks": 16384,
00:04:13.509 "uuid": "4eed8b85-9c8e-58d7-8cc3-122c6b255e59",
00:04:13.509 "assigned_rate_limits": {
00:04:13.509 "rw_ios_per_sec": 0,
00:04:13.509 "rw_mbytes_per_sec": 0,
00:04:13.509 "r_mbytes_per_sec": 0,
00:04:13.509 "w_mbytes_per_sec": 0
00:04:13.509 },
00:04:13.509 "claimed": false,
00:04:13.509 "zoned": false,
00:04:13.509 "supported_io_types": {
00:04:13.509 "read": true,
00:04:13.509 "write": true,
00:04:13.509 "unmap": true,
00:04:13.509 "flush": true,
00:04:13.509 "reset": true,
00:04:13.509 "nvme_admin": false,
00:04:13.509 "nvme_io": false,
00:04:13.509 "nvme_io_md": false,
00:04:13.509 "write_zeroes": true,
00:04:13.509 "zcopy": true,
00:04:13.509 "get_zone_info": false,
00:04:13.509 "zone_management": false,
00:04:13.509 "zone_append": false,
00:04:13.509 "compare": false,
00:04:13.509 "compare_and_write": false,
00:04:13.509 "abort": true,
00:04:13.509 "seek_hole": false,
00:04:13.509 "seek_data": false,
00:04:13.509 "copy": true,
00:04:13.509 "nvme_iov_md": false
00:04:13.509 },
00:04:13.509 "memory_domains": [
00:04:13.509 {
00:04:13.509 "dma_device_id": "system",
00:04:13.509 "dma_device_type": 1
00:04:13.509 },
00:04:13.509 {
00:04:13.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:13.509 "dma_device_type": 2
00:04:13.509 }
00:04:13.509 ],
00:04:13.509 "driver_specific": {
00:04:13.509 "passthru": {
00:04:13.509 "name": "Passthru0",
00:04:13.509 "base_bdev_name": "Malloc0"
00:04:13.509 }
00:04:13.509 }
00:04:13.509 }
00:04:13.509 ]'
00:04:13.509 11:42:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:13.509 11:42:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:13.509 11:42:39 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:13.509 11:42:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:13.509 11:42:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:13.509 11:42:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:13.509 11:42:39 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:04:13.509 11:42:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:13.509 11:42:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:13.509 11:42:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:13.509 11:42:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:13.509 11:42:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:13.509 11:42:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:13.509 11:42:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:13.509 11:42:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:13.509 11:42:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:13.509 11:42:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:13.509
00:04:13.509 real 0m0.356s
00:04:13.509 user 0m0.206s
00:04:13.509 sys 0m0.048s
00:04:13.509 11:42:39 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:13.509 11:42:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:13.509 ************************************
00:04:13.509 END TEST rpc_integrity
00:04:13.509 ************************************
00:04:13.769 11:42:39 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:04:13.769 11:42:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:13.769 11:42:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:13.769 11:42:39 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:13.769 ************************************
00:04:13.769 START TEST rpc_plugins
00:04:13.769 ************************************
00:04:13.769 11:42:39 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:04:13.769 11:42:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:04:13.769 11:42:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:13.769 11:42:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:13.769 11:42:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:13.769 11:42:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:04:13.769 11:42:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:04:13.769 11:42:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:13.769 11:42:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:13.769 11:42:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:13.769 11:42:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:04:13.769 {
00:04:13.769 "name": "Malloc1",
00:04:13.769 "aliases": [
00:04:13.769 "729c18d6-9422-46e2-8dd5-347f5ce5fd58"
00:04:13.769 ],
00:04:13.769 "product_name": "Malloc disk",
00:04:13.769 "block_size": 4096,
00:04:13.769 "num_blocks": 256,
00:04:13.769 "uuid": "729c18d6-9422-46e2-8dd5-347f5ce5fd58",
00:04:13.769 "assigned_rate_limits": {
00:04:13.769 "rw_ios_per_sec": 0,
00:04:13.769 "rw_mbytes_per_sec": 0,
00:04:13.769 "r_mbytes_per_sec": 0,
00:04:13.769 "w_mbytes_per_sec": 0
00:04:13.769 },
00:04:13.769 "claimed": false,
00:04:13.769 "zoned": false,
00:04:13.769 "supported_io_types": {
00:04:13.769 "read": true,
00:04:13.769 "write": true,
00:04:13.769 "unmap": true,
00:04:13.769 "flush": true,
00:04:13.769 "reset": true,
00:04:13.769 "nvme_admin": false,
00:04:13.769 "nvme_io": false,
00:04:13.769 "nvme_io_md": false,
00:04:13.769 "write_zeroes": true,
00:04:13.769 "zcopy": true,
00:04:13.769 "get_zone_info": false,
00:04:13.769 "zone_management": false,
00:04:13.769 "zone_append": false,
00:04:13.769 "compare": false,
00:04:13.769 "compare_and_write": false,
00:04:13.769 "abort": true,
00:04:13.769 "seek_hole": false,
00:04:13.769 "seek_data": false,
00:04:13.769 "copy": true,
00:04:13.769 "nvme_iov_md": false
00:04:13.769 },
00:04:13.769 "memory_domains": [
00:04:13.769 {
00:04:13.769 "dma_device_id": "system",
00:04:13.769 "dma_device_type": 1
00:04:13.769 },
00:04:13.769 {
00:04:13.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:13.769 "dma_device_type": 2
00:04:13.769 }
00:04:13.769 ],
00:04:13.769 "driver_specific": {}
00:04:13.769 }
00:04:13.769 ]'
00:04:13.769 11:42:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:04:13.769 11:42:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:04:13.769 11:42:40 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:04:13.769 11:42:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:13.769 11:42:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:13.769 11:42:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:13.769 11:42:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:04:13.769 11:42:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:13.769 11:42:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:13.769 11:42:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:13.769 11:42:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:04:13.769 11:42:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:04:13.769 11:42:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:04:13.769
00:04:13.769 real 0m0.175s
00:04:13.769 user 0m0.104s
00:04:13.769 sys 0m0.024s
00:04:13.769 11:42:40 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:13.769 11:42:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:13.769 ************************************
00:04:13.769 END TEST rpc_plugins
00:04:13.769 ************************************
00:04:13.770 11:42:40 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:04:13.770 11:42:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:14.029 11:42:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:14.029 11:42:40 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:14.029 ************************************
00:04:14.029 START TEST rpc_trace_cmd_test
00:04:14.029 ************************************
00:04:14.029 11:42:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:04:14.029 11:42:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:04:14.029 11:42:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:04:14.029 11:42:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:14.029 11:42:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:04:14.029 11:42:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:14.029 11:42:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:04:14.029 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56905",
00:04:14.029 "tpoint_group_mask": "0x8",
00:04:14.029 "iscsi_conn": {
00:04:14.029 "mask": "0x2",
00:04:14.029 "tpoint_mask": "0x0"
00:04:14.029 },
00:04:14.029 "scsi": {
00:04:14.029 "mask": "0x4",
00:04:14.029 "tpoint_mask": "0x0"
00:04:14.029 },
00:04:14.029 "bdev": {
00:04:14.029 "mask": "0x8",
00:04:14.029 "tpoint_mask": "0xffffffffffffffff"
00:04:14.029 },
00:04:14.029 "nvmf_rdma": {
00:04:14.029 "mask": "0x10",
00:04:14.029 "tpoint_mask": "0x0"
00:04:14.029 },
00:04:14.029 "nvmf_tcp": {
00:04:14.029 "mask": "0x20",
00:04:14.029 "tpoint_mask": "0x0"
00:04:14.029 },
00:04:14.029 "ftl": {
00:04:14.029 "mask": "0x40",
00:04:14.029 "tpoint_mask": "0x0"
00:04:14.029 },
00:04:14.029 "blobfs": {
00:04:14.029 "mask": "0x80",
00:04:14.029 "tpoint_mask": "0x0"
00:04:14.029 },
00:04:14.029 "dsa": {
00:04:14.029 "mask": "0x200",
00:04:14.029 "tpoint_mask": "0x0"
00:04:14.029 },
00:04:14.029 "thread": {
00:04:14.029 "mask": "0x400",
00:04:14.029 "tpoint_mask": "0x0"
00:04:14.029 },
00:04:14.029 "nvme_pcie": {
00:04:14.029 "mask": "0x800",
00:04:14.029 "tpoint_mask": "0x0"
00:04:14.029 },
00:04:14.029 "iaa": {
00:04:14.029 "mask": "0x1000",
00:04:14.029 "tpoint_mask": "0x0"
00:04:14.029 },
00:04:14.029 "nvme_tcp": {
00:04:14.029 "mask": "0x2000",
00:04:14.029 "tpoint_mask": "0x0"
00:04:14.029 },
00:04:14.029 "bdev_nvme": {
00:04:14.029 "mask": "0x4000",
00:04:14.029 "tpoint_mask": "0x0"
00:04:14.029 },
00:04:14.029 "sock": {
00:04:14.029 "mask": "0x8000",
00:04:14.029 "tpoint_mask": "0x0"
00:04:14.029 },
00:04:14.029 "blob": {
00:04:14.029 "mask": "0x10000",
00:04:14.029 "tpoint_mask": "0x0"
00:04:14.029 },
00:04:14.029 "bdev_raid": {
00:04:14.029 "mask": "0x20000",
00:04:14.029 "tpoint_mask": "0x0"
00:04:14.029 },
00:04:14.029 "scheduler": {
00:04:14.029 "mask": "0x40000",
00:04:14.029 "tpoint_mask": "0x0"
00:04:14.029 }
00:04:14.029 }'
00:04:14.029 11:42:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:04:14.029 11:42:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:04:14.029 11:42:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:04:14.029 11:42:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:04:14.029 11:42:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:04:14.029 11:42:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:04:14.029 11:42:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:04:14.029 11:42:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:04:14.029 11:42:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:04:14.029 11:42:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:04:14.030
00:04:14.030 real 0m0.235s
00:04:14.030 user 0m0.196s
00:04:14.030 sys 0m0.031s
00:04:14.030 11:42:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:14.030 11:42:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:14.030 ************************************ 00:04:14.030 END TEST rpc_trace_cmd_test 00:04:14.030 ************************************ 00:04:14.290 11:42:40 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:14.290 11:42:40 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:14.290 11:42:40 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:14.290 11:42:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.290 11:42:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.290 11:42:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.290 ************************************ 00:04:14.290 START TEST rpc_daemon_integrity 00:04:14.290 ************************************ 00:04:14.290 11:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:14.290 11:42:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:14.290 11:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.290 11:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.290 11:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.290 11:42:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:14.290 11:42:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:14.290 11:42:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:14.290 11:42:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:14.290 11:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.290 11:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.290 11:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.290 11:42:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:04:14.290 11:42:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:14.290 11:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.290 11:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.290 11:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.290 11:42:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:14.290 { 00:04:14.290 "name": "Malloc2", 00:04:14.290 "aliases": [ 00:04:14.290 "8326401e-7998-4078-9f8d-ead6921abc42" 00:04:14.290 ], 00:04:14.290 "product_name": "Malloc disk", 00:04:14.290 "block_size": 512, 00:04:14.290 "num_blocks": 16384, 00:04:14.290 "uuid": "8326401e-7998-4078-9f8d-ead6921abc42", 00:04:14.290 "assigned_rate_limits": { 00:04:14.290 "rw_ios_per_sec": 0, 00:04:14.290 "rw_mbytes_per_sec": 0, 00:04:14.290 "r_mbytes_per_sec": 0, 00:04:14.290 "w_mbytes_per_sec": 0 00:04:14.290 }, 00:04:14.290 "claimed": false, 00:04:14.290 "zoned": false, 00:04:14.290 "supported_io_types": { 00:04:14.290 "read": true, 00:04:14.290 "write": true, 00:04:14.290 "unmap": true, 00:04:14.290 "flush": true, 00:04:14.290 "reset": true, 00:04:14.290 "nvme_admin": false, 00:04:14.290 "nvme_io": false, 00:04:14.290 "nvme_io_md": false, 00:04:14.290 "write_zeroes": true, 00:04:14.290 "zcopy": true, 00:04:14.290 "get_zone_info": false, 00:04:14.290 "zone_management": false, 00:04:14.290 "zone_append": false, 00:04:14.290 "compare": false, 00:04:14.290 "compare_and_write": false, 00:04:14.290 "abort": true, 00:04:14.290 "seek_hole": false, 00:04:14.290 "seek_data": false, 00:04:14.290 "copy": true, 00:04:14.290 "nvme_iov_md": false 00:04:14.290 }, 00:04:14.290 "memory_domains": [ 00:04:14.290 { 00:04:14.290 "dma_device_id": "system", 00:04:14.291 "dma_device_type": 1 00:04:14.291 }, 00:04:14.291 { 00:04:14.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.291 "dma_device_type": 2 00:04:14.291 } 
00:04:14.291 ], 00:04:14.291 "driver_specific": {} 00:04:14.291 } 00:04:14.291 ]' 00:04:14.291 11:42:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:14.291 11:42:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:14.291 11:42:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:14.291 11:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.291 11:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.291 [2024-11-27 11:42:40.614319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:14.291 [2024-11-27 11:42:40.614369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:14.291 [2024-11-27 11:42:40.614391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:14.291 [2024-11-27 11:42:40.614401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:14.291 [2024-11-27 11:42:40.616619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:14.291 [2024-11-27 11:42:40.616656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:14.291 Passthru0 00:04:14.291 11:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.291 11:42:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:14.291 11:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.291 11:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.291 11:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.291 11:42:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:14.291 { 00:04:14.291 "name": "Malloc2", 00:04:14.291 "aliases": [ 00:04:14.291 "8326401e-7998-4078-9f8d-ead6921abc42" 
00:04:14.291 ], 00:04:14.291 "product_name": "Malloc disk", 00:04:14.291 "block_size": 512, 00:04:14.291 "num_blocks": 16384, 00:04:14.291 "uuid": "8326401e-7998-4078-9f8d-ead6921abc42", 00:04:14.291 "assigned_rate_limits": { 00:04:14.291 "rw_ios_per_sec": 0, 00:04:14.291 "rw_mbytes_per_sec": 0, 00:04:14.291 "r_mbytes_per_sec": 0, 00:04:14.291 "w_mbytes_per_sec": 0 00:04:14.291 }, 00:04:14.291 "claimed": true, 00:04:14.291 "claim_type": "exclusive_write", 00:04:14.291 "zoned": false, 00:04:14.291 "supported_io_types": { 00:04:14.291 "read": true, 00:04:14.291 "write": true, 00:04:14.291 "unmap": true, 00:04:14.291 "flush": true, 00:04:14.291 "reset": true, 00:04:14.291 "nvme_admin": false, 00:04:14.291 "nvme_io": false, 00:04:14.291 "nvme_io_md": false, 00:04:14.291 "write_zeroes": true, 00:04:14.291 "zcopy": true, 00:04:14.291 "get_zone_info": false, 00:04:14.291 "zone_management": false, 00:04:14.291 "zone_append": false, 00:04:14.291 "compare": false, 00:04:14.291 "compare_and_write": false, 00:04:14.291 "abort": true, 00:04:14.291 "seek_hole": false, 00:04:14.291 "seek_data": false, 00:04:14.291 "copy": true, 00:04:14.291 "nvme_iov_md": false 00:04:14.291 }, 00:04:14.291 "memory_domains": [ 00:04:14.291 { 00:04:14.291 "dma_device_id": "system", 00:04:14.291 "dma_device_type": 1 00:04:14.291 }, 00:04:14.291 { 00:04:14.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.291 "dma_device_type": 2 00:04:14.291 } 00:04:14.291 ], 00:04:14.291 "driver_specific": {} 00:04:14.291 }, 00:04:14.291 { 00:04:14.291 "name": "Passthru0", 00:04:14.291 "aliases": [ 00:04:14.291 "d48e7cef-d0aa-504a-9197-ef482099378f" 00:04:14.291 ], 00:04:14.291 "product_name": "passthru", 00:04:14.291 "block_size": 512, 00:04:14.291 "num_blocks": 16384, 00:04:14.291 "uuid": "d48e7cef-d0aa-504a-9197-ef482099378f", 00:04:14.291 "assigned_rate_limits": { 00:04:14.291 "rw_ios_per_sec": 0, 00:04:14.291 "rw_mbytes_per_sec": 0, 00:04:14.291 "r_mbytes_per_sec": 0, 00:04:14.291 "w_mbytes_per_sec": 0 
00:04:14.291 }, 00:04:14.291 "claimed": false, 00:04:14.291 "zoned": false, 00:04:14.291 "supported_io_types": { 00:04:14.291 "read": true, 00:04:14.291 "write": true, 00:04:14.291 "unmap": true, 00:04:14.291 "flush": true, 00:04:14.291 "reset": true, 00:04:14.291 "nvme_admin": false, 00:04:14.291 "nvme_io": false, 00:04:14.291 "nvme_io_md": false, 00:04:14.291 "write_zeroes": true, 00:04:14.291 "zcopy": true, 00:04:14.291 "get_zone_info": false, 00:04:14.291 "zone_management": false, 00:04:14.291 "zone_append": false, 00:04:14.291 "compare": false, 00:04:14.291 "compare_and_write": false, 00:04:14.291 "abort": true, 00:04:14.291 "seek_hole": false, 00:04:14.291 "seek_data": false, 00:04:14.291 "copy": true, 00:04:14.291 "nvme_iov_md": false 00:04:14.291 }, 00:04:14.291 "memory_domains": [ 00:04:14.291 { 00:04:14.291 "dma_device_id": "system", 00:04:14.291 "dma_device_type": 1 00:04:14.291 }, 00:04:14.291 { 00:04:14.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.291 "dma_device_type": 2 00:04:14.291 } 00:04:14.291 ], 00:04:14.291 "driver_specific": { 00:04:14.291 "passthru": { 00:04:14.291 "name": "Passthru0", 00:04:14.291 "base_bdev_name": "Malloc2" 00:04:14.291 } 00:04:14.291 } 00:04:14.291 } 00:04:14.291 ]' 00:04:14.291 11:42:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:14.554 11:42:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:14.554 11:42:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:14.554 11:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.554 11:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.554 11:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.554 11:42:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:14.554 11:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:14.554 11:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.554 11:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.554 11:42:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:14.554 11:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.554 11:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.554 11:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.554 11:42:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:14.554 11:42:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:14.554 11:42:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:14.554 00:04:14.554 real 0m0.352s 00:04:14.554 user 0m0.196s 00:04:14.554 sys 0m0.054s 00:04:14.554 11:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.554 11:42:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.554 ************************************ 00:04:14.554 END TEST rpc_daemon_integrity 00:04:14.554 ************************************ 00:04:14.554 11:42:40 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:14.554 11:42:40 rpc -- rpc/rpc.sh@84 -- # killprocess 56905 00:04:14.554 11:42:40 rpc -- common/autotest_common.sh@954 -- # '[' -z 56905 ']' 00:04:14.554 11:42:40 rpc -- common/autotest_common.sh@958 -- # kill -0 56905 00:04:14.554 11:42:40 rpc -- common/autotest_common.sh@959 -- # uname 00:04:14.554 11:42:40 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:14.554 11:42:40 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56905 00:04:14.554 11:42:40 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:14.554 11:42:40 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:14.554 
killing process with pid 56905 00:04:14.554 11:42:40 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56905' 00:04:14.554 11:42:40 rpc -- common/autotest_common.sh@973 -- # kill 56905 00:04:14.554 11:42:40 rpc -- common/autotest_common.sh@978 -- # wait 56905 00:04:17.092 00:04:17.092 real 0m5.119s 00:04:17.092 user 0m5.663s 00:04:17.092 sys 0m0.913s 00:04:17.092 11:42:43 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.092 ************************************ 00:04:17.092 END TEST rpc 00:04:17.092 ************************************ 00:04:17.092 11:42:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.092 11:42:43 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:17.092 11:42:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.092 11:42:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.092 11:42:43 -- common/autotest_common.sh@10 -- # set +x 00:04:17.092 ************************************ 00:04:17.092 START TEST skip_rpc 00:04:17.092 ************************************ 00:04:17.092 11:42:43 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:17.092 * Looking for test storage... 
00:04:17.092 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:17.092 11:42:43 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:17.092 11:42:43 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:17.092 11:42:43 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:17.092 11:42:43 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:17.092 11:42:43 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:17.092 11:42:43 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:17.092 11:42:43 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:17.092 11:42:43 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.092 11:42:43 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:17.092 11:42:43 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:17.092 11:42:43 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:17.092 11:42:43 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:17.092 11:42:43 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:17.092 11:42:43 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:17.092 11:42:43 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:17.092 11:42:43 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:17.092 11:42:43 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:17.092 11:42:43 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:17.092 11:42:43 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:17.092 11:42:43 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:17.092 11:42:43 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:17.092 11:42:43 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.092 11:42:43 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:17.092 11:42:43 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:17.092 11:42:43 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:17.092 11:42:43 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:17.092 11:42:43 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.092 11:42:43 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:17.092 11:42:43 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:17.092 11:42:43 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:17.092 11:42:43 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:17.092 11:42:43 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:17.092 11:42:43 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.092 11:42:43 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:17.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.092 --rc genhtml_branch_coverage=1 00:04:17.092 --rc genhtml_function_coverage=1 00:04:17.092 --rc genhtml_legend=1 00:04:17.092 --rc geninfo_all_blocks=1 00:04:17.092 --rc geninfo_unexecuted_blocks=1 00:04:17.092 00:04:17.092 ' 00:04:17.092 11:42:43 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:17.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.092 --rc genhtml_branch_coverage=1 00:04:17.092 --rc genhtml_function_coverage=1 00:04:17.092 --rc genhtml_legend=1 00:04:17.092 --rc geninfo_all_blocks=1 00:04:17.092 --rc geninfo_unexecuted_blocks=1 00:04:17.092 00:04:17.092 ' 00:04:17.092 11:42:43 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:17.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.092 --rc genhtml_branch_coverage=1 00:04:17.092 --rc genhtml_function_coverage=1 00:04:17.092 --rc genhtml_legend=1 00:04:17.092 --rc geninfo_all_blocks=1 00:04:17.092 --rc geninfo_unexecuted_blocks=1 00:04:17.092 00:04:17.092 ' 00:04:17.092 11:42:43 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:17.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.092 --rc genhtml_branch_coverage=1 00:04:17.092 --rc genhtml_function_coverage=1 00:04:17.092 --rc genhtml_legend=1 00:04:17.092 --rc geninfo_all_blocks=1 00:04:17.092 --rc geninfo_unexecuted_blocks=1 00:04:17.092 00:04:17.092 ' 00:04:17.092 11:42:43 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:17.092 11:42:43 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:17.092 11:42:43 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:17.092 11:42:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.092 11:42:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.092 11:42:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.352 ************************************ 00:04:17.352 START TEST skip_rpc 00:04:17.352 ************************************ 00:04:17.352 11:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:17.352 11:42:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57134 00:04:17.352 11:42:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:17.352 11:42:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:17.352 11:42:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:17.352 [2024-11-27 11:42:43.575451] Starting SPDK v25.01-pre 
git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:04:17.352 [2024-11-27 11:42:43.575573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57134 ] 00:04:17.611 [2024-11-27 11:42:43.748592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.611 [2024-11-27 11:42:43.854397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.889 11:42:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:22.889 11:42:48 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:22.889 11:42:48 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:22.889 11:42:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:22.889 11:42:48 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:22.889 11:42:48 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:22.889 11:42:48 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:22.889 11:42:48 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:22.889 11:42:48 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.889 11:42:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.889 11:42:48 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:22.889 11:42:48 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:22.889 11:42:48 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:22.889 11:42:48 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:22.889 11:42:48 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:22.889 11:42:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:22.889 11:42:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57134 00:04:22.889 11:42:48 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57134 ']' 00:04:22.889 11:42:48 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57134 00:04:22.889 11:42:48 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:22.889 11:42:48 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:22.889 11:42:48 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57134 00:04:22.889 11:42:48 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:22.889 11:42:48 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:22.889 11:42:48 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57134' 00:04:22.889 killing process with pid 57134 00:04:22.889 11:42:48 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57134 00:04:22.889 11:42:48 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57134 00:04:24.798 00:04:24.798 real 0m7.361s 00:04:24.798 user 0m6.898s 00:04:24.798 sys 0m0.380s 00:04:24.798 11:42:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.798 11:42:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.798 ************************************ 00:04:24.798 END TEST skip_rpc 00:04:24.798 ************************************ 00:04:24.798 11:42:50 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:24.798 11:42:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.798 11:42:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.798 11:42:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.798 
************************************ 00:04:24.798 START TEST skip_rpc_with_json 00:04:24.798 ************************************ 00:04:24.798 11:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:24.798 11:42:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:24.798 11:42:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57239 00:04:24.798 11:42:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:24.798 11:42:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:24.798 11:42:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57239 00:04:24.798 11:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57239 ']' 00:04:24.798 11:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.798 11:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.798 11:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.798 11:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.798 11:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:24.798 [2024-11-27 11:42:51.003886] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:04:24.798 [2024-11-27 11:42:51.003999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57239 ] 00:04:24.798 [2024-11-27 11:42:51.178719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.058 [2024-11-27 11:42:51.291929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.997 11:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.997 11:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:25.997 11:42:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:25.997 11:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.997 11:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:25.997 [2024-11-27 11:42:52.116616] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:25.997 request: 00:04:25.997 { 00:04:25.997 "trtype": "tcp", 00:04:25.997 "method": "nvmf_get_transports", 00:04:25.997 "req_id": 1 00:04:25.997 } 00:04:25.997 Got JSON-RPC error response 00:04:25.997 response: 00:04:25.997 { 00:04:25.997 "code": -19, 00:04:25.997 "message": "No such device" 00:04:25.997 } 00:04:25.997 11:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:25.997 11:42:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:25.997 11:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.997 11:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:25.997 [2024-11-27 11:42:52.128706] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:25.997 11:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.997 11:42:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:25.997 11:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.997 11:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:25.997 11:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.997 11:42:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:25.997 { 00:04:25.997 "subsystems": [ 00:04:25.997 { 00:04:25.997 "subsystem": "fsdev", 00:04:25.997 "config": [ 00:04:25.997 { 00:04:25.997 "method": "fsdev_set_opts", 00:04:25.997 "params": { 00:04:25.997 "fsdev_io_pool_size": 65535, 00:04:25.997 "fsdev_io_cache_size": 256 00:04:25.997 } 00:04:25.997 } 00:04:25.997 ] 00:04:25.997 }, 00:04:25.997 { 00:04:25.997 "subsystem": "keyring", 00:04:25.997 "config": [] 00:04:25.997 }, 00:04:25.997 { 00:04:25.997 "subsystem": "iobuf", 00:04:25.997 "config": [ 00:04:25.997 { 00:04:25.997 "method": "iobuf_set_options", 00:04:25.997 "params": { 00:04:25.997 "small_pool_count": 8192, 00:04:25.997 "large_pool_count": 1024, 00:04:25.997 "small_bufsize": 8192, 00:04:25.997 "large_bufsize": 135168, 00:04:25.997 "enable_numa": false 00:04:25.997 } 00:04:25.997 } 00:04:25.997 ] 00:04:25.997 }, 00:04:25.997 { 00:04:25.997 "subsystem": "sock", 00:04:25.997 "config": [ 00:04:25.997 { 00:04:25.997 "method": "sock_set_default_impl", 00:04:25.997 "params": { 00:04:25.997 "impl_name": "posix" 00:04:25.997 } 00:04:25.997 }, 00:04:25.997 { 00:04:25.997 "method": "sock_impl_set_options", 00:04:25.997 "params": { 00:04:25.997 "impl_name": "ssl", 00:04:25.997 "recv_buf_size": 4096, 00:04:25.997 "send_buf_size": 4096, 00:04:25.997 "enable_recv_pipe": true, 00:04:25.997 "enable_quickack": false, 00:04:25.997 
"enable_placement_id": 0, 00:04:25.997 "enable_zerocopy_send_server": true, 00:04:25.997 "enable_zerocopy_send_client": false, 00:04:25.997 "zerocopy_threshold": 0, 00:04:25.997 "tls_version": 0, 00:04:25.997 "enable_ktls": false 00:04:25.998 } 00:04:25.998 }, 00:04:25.998 { 00:04:25.998 "method": "sock_impl_set_options", 00:04:25.998 "params": { 00:04:25.998 "impl_name": "posix", 00:04:25.998 "recv_buf_size": 2097152, 00:04:25.998 "send_buf_size": 2097152, 00:04:25.998 "enable_recv_pipe": true, 00:04:25.998 "enable_quickack": false, 00:04:25.998 "enable_placement_id": 0, 00:04:25.998 "enable_zerocopy_send_server": true, 00:04:25.998 "enable_zerocopy_send_client": false, 00:04:25.998 "zerocopy_threshold": 0, 00:04:25.998 "tls_version": 0, 00:04:25.998 "enable_ktls": false 00:04:25.998 } 00:04:25.998 } 00:04:25.998 ] 00:04:25.998 }, 00:04:25.998 { 00:04:25.998 "subsystem": "vmd", 00:04:25.998 "config": [] 00:04:25.998 }, 00:04:25.998 { 00:04:25.998 "subsystem": "accel", 00:04:25.998 "config": [ 00:04:25.998 { 00:04:25.998 "method": "accel_set_options", 00:04:25.998 "params": { 00:04:25.998 "small_cache_size": 128, 00:04:25.998 "large_cache_size": 16, 00:04:25.998 "task_count": 2048, 00:04:25.998 "sequence_count": 2048, 00:04:25.998 "buf_count": 2048 00:04:25.998 } 00:04:25.998 } 00:04:25.998 ] 00:04:25.998 }, 00:04:25.998 { 00:04:25.998 "subsystem": "bdev", 00:04:25.998 "config": [ 00:04:25.998 { 00:04:25.998 "method": "bdev_set_options", 00:04:25.998 "params": { 00:04:25.998 "bdev_io_pool_size": 65535, 00:04:25.998 "bdev_io_cache_size": 256, 00:04:25.998 "bdev_auto_examine": true, 00:04:25.998 "iobuf_small_cache_size": 128, 00:04:25.998 "iobuf_large_cache_size": 16 00:04:25.998 } 00:04:25.998 }, 00:04:25.998 { 00:04:25.998 "method": "bdev_raid_set_options", 00:04:25.998 "params": { 00:04:25.998 "process_window_size_kb": 1024, 00:04:25.998 "process_max_bandwidth_mb_sec": 0 00:04:25.998 } 00:04:25.998 }, 00:04:25.998 { 00:04:25.998 "method": "bdev_iscsi_set_options", 
00:04:25.998 "params": { 00:04:25.998 "timeout_sec": 30 00:04:25.998 } 00:04:25.998 }, 00:04:25.998 { 00:04:25.998 "method": "bdev_nvme_set_options", 00:04:25.998 "params": { 00:04:25.998 "action_on_timeout": "none", 00:04:25.998 "timeout_us": 0, 00:04:25.998 "timeout_admin_us": 0, 00:04:25.998 "keep_alive_timeout_ms": 10000, 00:04:25.998 "arbitration_burst": 0, 00:04:25.998 "low_priority_weight": 0, 00:04:25.998 "medium_priority_weight": 0, 00:04:25.998 "high_priority_weight": 0, 00:04:25.998 "nvme_adminq_poll_period_us": 10000, 00:04:25.998 "nvme_ioq_poll_period_us": 0, 00:04:25.998 "io_queue_requests": 0, 00:04:25.998 "delay_cmd_submit": true, 00:04:25.998 "transport_retry_count": 4, 00:04:25.998 "bdev_retry_count": 3, 00:04:25.998 "transport_ack_timeout": 0, 00:04:25.998 "ctrlr_loss_timeout_sec": 0, 00:04:25.998 "reconnect_delay_sec": 0, 00:04:25.998 "fast_io_fail_timeout_sec": 0, 00:04:25.998 "disable_auto_failback": false, 00:04:25.998 "generate_uuids": false, 00:04:25.998 "transport_tos": 0, 00:04:25.998 "nvme_error_stat": false, 00:04:25.998 "rdma_srq_size": 0, 00:04:25.998 "io_path_stat": false, 00:04:25.998 "allow_accel_sequence": false, 00:04:25.998 "rdma_max_cq_size": 0, 00:04:25.998 "rdma_cm_event_timeout_ms": 0, 00:04:25.998 "dhchap_digests": [ 00:04:25.998 "sha256", 00:04:25.998 "sha384", 00:04:25.998 "sha512" 00:04:25.998 ], 00:04:25.998 "dhchap_dhgroups": [ 00:04:25.998 "null", 00:04:25.998 "ffdhe2048", 00:04:25.998 "ffdhe3072", 00:04:25.998 "ffdhe4096", 00:04:25.998 "ffdhe6144", 00:04:25.998 "ffdhe8192" 00:04:25.998 ] 00:04:25.998 } 00:04:25.998 }, 00:04:25.998 { 00:04:25.998 "method": "bdev_nvme_set_hotplug", 00:04:25.998 "params": { 00:04:25.998 "period_us": 100000, 00:04:25.998 "enable": false 00:04:25.998 } 00:04:25.998 }, 00:04:25.998 { 00:04:25.998 "method": "bdev_wait_for_examine" 00:04:25.998 } 00:04:25.998 ] 00:04:25.998 }, 00:04:25.998 { 00:04:25.998 "subsystem": "scsi", 00:04:25.998 "config": null 00:04:25.998 }, 00:04:25.998 { 
00:04:25.998 "subsystem": "scheduler", 00:04:25.998 "config": [ 00:04:25.998 { 00:04:25.998 "method": "framework_set_scheduler", 00:04:25.998 "params": { 00:04:25.998 "name": "static" 00:04:25.998 } 00:04:25.998 } 00:04:25.998 ] 00:04:25.998 }, 00:04:25.998 { 00:04:25.998 "subsystem": "vhost_scsi", 00:04:25.998 "config": [] 00:04:25.998 }, 00:04:25.998 { 00:04:25.998 "subsystem": "vhost_blk", 00:04:25.998 "config": [] 00:04:25.998 }, 00:04:25.998 { 00:04:25.998 "subsystem": "ublk", 00:04:25.998 "config": [] 00:04:25.998 }, 00:04:25.998 { 00:04:25.998 "subsystem": "nbd", 00:04:25.998 "config": [] 00:04:25.998 }, 00:04:25.998 { 00:04:25.998 "subsystem": "nvmf", 00:04:25.998 "config": [ 00:04:25.998 { 00:04:25.998 "method": "nvmf_set_config", 00:04:25.998 "params": { 00:04:25.998 "discovery_filter": "match_any", 00:04:25.998 "admin_cmd_passthru": { 00:04:25.998 "identify_ctrlr": false 00:04:25.998 }, 00:04:25.998 "dhchap_digests": [ 00:04:25.998 "sha256", 00:04:25.998 "sha384", 00:04:25.998 "sha512" 00:04:25.998 ], 00:04:25.998 "dhchap_dhgroups": [ 00:04:25.998 "null", 00:04:25.998 "ffdhe2048", 00:04:25.998 "ffdhe3072", 00:04:25.998 "ffdhe4096", 00:04:25.998 "ffdhe6144", 00:04:25.998 "ffdhe8192" 00:04:25.998 ] 00:04:25.998 } 00:04:25.998 }, 00:04:25.998 { 00:04:25.998 "method": "nvmf_set_max_subsystems", 00:04:25.998 "params": { 00:04:25.998 "max_subsystems": 1024 00:04:25.998 } 00:04:25.998 }, 00:04:25.998 { 00:04:25.998 "method": "nvmf_set_crdt", 00:04:25.998 "params": { 00:04:25.998 "crdt1": 0, 00:04:25.998 "crdt2": 0, 00:04:25.998 "crdt3": 0 00:04:25.998 } 00:04:25.998 }, 00:04:25.998 { 00:04:25.998 "method": "nvmf_create_transport", 00:04:25.998 "params": { 00:04:25.998 "trtype": "TCP", 00:04:25.998 "max_queue_depth": 128, 00:04:25.998 "max_io_qpairs_per_ctrlr": 127, 00:04:25.998 "in_capsule_data_size": 4096, 00:04:25.998 "max_io_size": 131072, 00:04:25.998 "io_unit_size": 131072, 00:04:25.998 "max_aq_depth": 128, 00:04:25.998 "num_shared_buffers": 511, 
00:04:25.998 "buf_cache_size": 4294967295, 00:04:25.998 "dif_insert_or_strip": false, 00:04:25.998 "zcopy": false, 00:04:25.998 "c2h_success": true, 00:04:25.998 "sock_priority": 0, 00:04:25.998 "abort_timeout_sec": 1, 00:04:25.998 "ack_timeout": 0, 00:04:25.998 "data_wr_pool_size": 0 00:04:25.998 } 00:04:25.998 } 00:04:25.998 ] 00:04:25.998 }, 00:04:25.998 { 00:04:25.998 "subsystem": "iscsi", 00:04:25.998 "config": [ 00:04:25.998 { 00:04:25.998 "method": "iscsi_set_options", 00:04:25.998 "params": { 00:04:25.998 "node_base": "iqn.2016-06.io.spdk", 00:04:25.998 "max_sessions": 128, 00:04:25.998 "max_connections_per_session": 2, 00:04:25.998 "max_queue_depth": 64, 00:04:25.998 "default_time2wait": 2, 00:04:25.998 "default_time2retain": 20, 00:04:25.998 "first_burst_length": 8192, 00:04:25.998 "immediate_data": true, 00:04:25.998 "allow_duplicated_isid": false, 00:04:25.998 "error_recovery_level": 0, 00:04:25.998 "nop_timeout": 60, 00:04:25.998 "nop_in_interval": 30, 00:04:25.998 "disable_chap": false, 00:04:25.998 "require_chap": false, 00:04:25.998 "mutual_chap": false, 00:04:25.998 "chap_group": 0, 00:04:25.998 "max_large_datain_per_connection": 64, 00:04:25.998 "max_r2t_per_connection": 4, 00:04:25.998 "pdu_pool_size": 36864, 00:04:25.998 "immediate_data_pool_size": 16384, 00:04:25.998 "data_out_pool_size": 2048 00:04:25.998 } 00:04:25.998 } 00:04:25.998 ] 00:04:25.998 } 00:04:25.998 ] 00:04:25.998 } 00:04:25.998 11:42:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:25.998 11:42:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57239 00:04:25.998 11:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57239 ']' 00:04:25.998 11:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57239 00:04:25.998 11:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:25.998 11:42:52 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:25.998 11:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57239 00:04:25.998 11:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:25.998 11:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:25.998 killing process with pid 57239 00:04:25.998 11:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57239' 00:04:25.998 11:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57239 00:04:25.998 11:42:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57239 00:04:28.539 11:42:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57289 00:04:28.539 11:42:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:28.539 11:42:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:33.824 11:42:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57289 00:04:33.824 11:42:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57289 ']' 00:04:33.824 11:42:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57289 00:04:33.824 11:42:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:33.824 11:42:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.824 11:42:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57289 00:04:33.824 11:42:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:33.824 11:42:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:04:33.824 killing process with pid 57289 00:04:33.824 11:42:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57289' 00:04:33.824 11:42:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57289 00:04:33.824 11:42:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57289 00:04:35.737 11:43:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:35.737 11:43:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:35.737 00:04:35.737 real 0m11.114s 00:04:35.737 user 0m10.596s 00:04:35.737 sys 0m0.809s 00:04:35.737 11:43:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.737 11:43:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:35.737 ************************************ 00:04:35.737 END TEST skip_rpc_with_json 00:04:35.737 ************************************ 00:04:35.737 11:43:02 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:35.737 11:43:02 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.738 11:43:02 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.738 11:43:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.738 ************************************ 00:04:35.738 START TEST skip_rpc_with_delay 00:04:35.738 ************************************ 00:04:35.738 11:43:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:35.738 11:43:02 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:35.738 11:43:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:35.738 11:43:02 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:35.738 11:43:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.738 11:43:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:35.738 11:43:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.738 11:43:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:35.738 11:43:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.738 11:43:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:35.738 11:43:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.738 11:43:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:35.738 11:43:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:35.999 [2024-11-27 11:43:02.190945] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:35.999 11:43:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:35.999 11:43:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:35.999 11:43:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:35.999 11:43:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:35.999 00:04:35.999 real 0m0.169s 00:04:35.999 user 0m0.088s 00:04:35.999 sys 0m0.081s 00:04:35.999 11:43:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.999 11:43:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:35.999 ************************************ 00:04:35.999 END TEST skip_rpc_with_delay 00:04:35.999 ************************************ 00:04:35.999 11:43:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:35.999 11:43:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:35.999 11:43:02 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:35.999 11:43:02 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.999 11:43:02 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.999 11:43:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.999 ************************************ 00:04:35.999 START TEST exit_on_failed_rpc_init 00:04:35.999 ************************************ 00:04:35.999 11:43:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:35.999 11:43:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57423 00:04:35.999 11:43:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:35.999 11:43:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57423 00:04:35.999 11:43:02 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57423 ']' 00:04:35.999 11:43:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.999 11:43:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.999 11:43:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.999 11:43:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.999 11:43:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:36.260 [2024-11-27 11:43:02.435788] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:04:36.260 [2024-11-27 11:43:02.435916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57423 ] 00:04:36.260 [2024-11-27 11:43:02.608710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.519 [2024-11-27 11:43:02.724798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.460 11:43:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.460 11:43:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:37.460 11:43:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.460 11:43:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:37.460 11:43:03 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:37.460 11:43:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:37.460 11:43:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.460 11:43:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:37.460 11:43:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.460 11:43:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:37.460 11:43:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.460 11:43:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:37.460 11:43:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.460 11:43:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:37.460 11:43:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:37.460 [2024-11-27 11:43:03.648191] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:04:37.460 [2024-11-27 11:43:03.648302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57441 ] 00:04:37.460 [2024-11-27 11:43:03.819918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.720 [2024-11-27 11:43:03.933118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.720 [2024-11-27 11:43:03.933200] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:37.720 [2024-11-27 11:43:03.933214] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:37.720 [2024-11-27 11:43:03.933225] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:37.980 11:43:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:37.980 11:43:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:37.980 11:43:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:37.980 11:43:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:37.980 11:43:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:37.980 11:43:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:37.980 11:43:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:37.980 11:43:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57423 00:04:37.980 11:43:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57423 ']' 00:04:37.980 11:43:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57423 00:04:37.980 11:43:04 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:37.980 11:43:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.980 11:43:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57423 00:04:37.980 killing process with pid 57423 00:04:37.980 11:43:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.980 11:43:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.980 11:43:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57423' 00:04:37.980 11:43:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57423 00:04:37.980 11:43:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57423 00:04:40.521 00:04:40.521 real 0m4.299s 00:04:40.521 user 0m4.593s 00:04:40.521 sys 0m0.574s 00:04:40.521 11:43:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.521 11:43:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:40.521 ************************************ 00:04:40.521 END TEST exit_on_failed_rpc_init 00:04:40.521 ************************************ 00:04:40.521 11:43:06 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:40.521 00:04:40.521 real 0m23.443s 00:04:40.521 user 0m22.371s 00:04:40.521 sys 0m2.154s 00:04:40.521 11:43:06 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.521 11:43:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.521 ************************************ 00:04:40.521 END TEST skip_rpc 00:04:40.521 ************************************ 00:04:40.521 11:43:06 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:40.521 11:43:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.521 11:43:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.521 11:43:06 -- common/autotest_common.sh@10 -- # set +x 00:04:40.521 ************************************ 00:04:40.521 START TEST rpc_client 00:04:40.521 ************************************ 00:04:40.521 11:43:06 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:40.521 * Looking for test storage... 00:04:40.521 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:40.521 11:43:06 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:40.521 11:43:06 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:40.521 11:43:06 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:40.782 11:43:06 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:40.782 11:43:06 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.782 11:43:06 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.782 11:43:06 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.782 11:43:06 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.782 11:43:06 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.782 11:43:06 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.782 11:43:06 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.782 11:43:06 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.782 11:43:06 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.782 11:43:06 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.782 11:43:06 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.782 11:43:06 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:40.782 11:43:06 rpc_client -- scripts/common.sh@345 
-- # : 1 00:04:40.782 11:43:06 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.782 11:43:06 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:40.782 11:43:06 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:40.782 11:43:06 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:40.782 11:43:06 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.782 11:43:06 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:40.782 11:43:06 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.782 11:43:06 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:40.782 11:43:06 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:40.782 11:43:06 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.782 11:43:06 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:40.782 11:43:06 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.782 11:43:06 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.782 11:43:06 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.782 11:43:06 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:40.782 11:43:06 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.782 11:43:06 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:40.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.782 --rc genhtml_branch_coverage=1 00:04:40.782 --rc genhtml_function_coverage=1 00:04:40.782 --rc genhtml_legend=1 00:04:40.782 --rc geninfo_all_blocks=1 00:04:40.782 --rc geninfo_unexecuted_blocks=1 00:04:40.782 00:04:40.782 ' 00:04:40.782 11:43:06 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:40.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.782 --rc genhtml_branch_coverage=1 00:04:40.782 --rc genhtml_function_coverage=1 00:04:40.782 --rc 
genhtml_legend=1 00:04:40.782 --rc geninfo_all_blocks=1 00:04:40.782 --rc geninfo_unexecuted_blocks=1 00:04:40.782 00:04:40.782 ' 00:04:40.782 11:43:06 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:40.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.782 --rc genhtml_branch_coverage=1 00:04:40.782 --rc genhtml_function_coverage=1 00:04:40.782 --rc genhtml_legend=1 00:04:40.782 --rc geninfo_all_blocks=1 00:04:40.782 --rc geninfo_unexecuted_blocks=1 00:04:40.782 00:04:40.782 ' 00:04:40.782 11:43:06 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:40.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.782 --rc genhtml_branch_coverage=1 00:04:40.782 --rc genhtml_function_coverage=1 00:04:40.782 --rc genhtml_legend=1 00:04:40.782 --rc geninfo_all_blocks=1 00:04:40.782 --rc geninfo_unexecuted_blocks=1 00:04:40.782 00:04:40.782 ' 00:04:40.782 11:43:06 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:40.782 OK 00:04:40.782 11:43:07 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:40.782 00:04:40.782 real 0m0.306s 00:04:40.782 user 0m0.165s 00:04:40.782 sys 0m0.160s 00:04:40.782 11:43:07 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.782 11:43:07 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:40.782 ************************************ 00:04:40.782 END TEST rpc_client 00:04:40.782 ************************************ 00:04:40.782 11:43:07 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:40.782 11:43:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.782 11:43:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.782 11:43:07 -- common/autotest_common.sh@10 -- # set +x 00:04:40.782 ************************************ 00:04:40.782 START TEST json_config 
00:04:40.782 ************************************ 00:04:40.782 11:43:07 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:41.043 11:43:07 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:41.043 11:43:07 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:41.043 11:43:07 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:41.043 11:43:07 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:41.043 11:43:07 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.043 11:43:07 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.043 11:43:07 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.043 11:43:07 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.043 11:43:07 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.043 11:43:07 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.043 11:43:07 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.043 11:43:07 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.043 11:43:07 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.043 11:43:07 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.043 11:43:07 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.043 11:43:07 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:41.043 11:43:07 json_config -- scripts/common.sh@345 -- # : 1 00:04:41.043 11:43:07 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.043 11:43:07 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.043 11:43:07 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:41.043 11:43:07 json_config -- scripts/common.sh@353 -- # local d=1 00:04:41.043 11:43:07 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.043 11:43:07 json_config -- scripts/common.sh@355 -- # echo 1 00:04:41.043 11:43:07 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.043 11:43:07 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:41.043 11:43:07 json_config -- scripts/common.sh@353 -- # local d=2 00:04:41.043 11:43:07 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.043 11:43:07 json_config -- scripts/common.sh@355 -- # echo 2 00:04:41.043 11:43:07 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.043 11:43:07 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.043 11:43:07 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.043 11:43:07 json_config -- scripts/common.sh@368 -- # return 0 00:04:41.043 11:43:07 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.043 11:43:07 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:41.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.043 --rc genhtml_branch_coverage=1 00:04:41.043 --rc genhtml_function_coverage=1 00:04:41.043 --rc genhtml_legend=1 00:04:41.043 --rc geninfo_all_blocks=1 00:04:41.043 --rc geninfo_unexecuted_blocks=1 00:04:41.043 00:04:41.043 ' 00:04:41.043 11:43:07 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:41.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.043 --rc genhtml_branch_coverage=1 00:04:41.043 --rc genhtml_function_coverage=1 00:04:41.043 --rc genhtml_legend=1 00:04:41.043 --rc geninfo_all_blocks=1 00:04:41.043 --rc geninfo_unexecuted_blocks=1 00:04:41.043 00:04:41.043 ' 00:04:41.043 11:43:07 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:41.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.043 --rc genhtml_branch_coverage=1 00:04:41.043 --rc genhtml_function_coverage=1 00:04:41.043 --rc genhtml_legend=1 00:04:41.043 --rc geninfo_all_blocks=1 00:04:41.043 --rc geninfo_unexecuted_blocks=1 00:04:41.043 00:04:41.043 ' 00:04:41.043 11:43:07 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:41.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.043 --rc genhtml_branch_coverage=1 00:04:41.043 --rc genhtml_function_coverage=1 00:04:41.043 --rc genhtml_legend=1 00:04:41.043 --rc geninfo_all_blocks=1 00:04:41.043 --rc geninfo_unexecuted_blocks=1 00:04:41.043 00:04:41.043 ' 00:04:41.043 11:43:07 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:41.043 11:43:07 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:41.043 11:43:07 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.043 11:43:07 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.043 11:43:07 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.043 11:43:07 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.043 11:43:07 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.043 11:43:07 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.043 11:43:07 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.043 11:43:07 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.043 11:43:07 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.043 11:43:07 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.043 11:43:07 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:205432e6-0d85-4ef4-92fc-cf1aa4632adc 00:04:41.043 11:43:07 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=205432e6-0d85-4ef4-92fc-cf1aa4632adc 00:04:41.043 11:43:07 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.043 11:43:07 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.044 11:43:07 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:41.044 11:43:07 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.044 11:43:07 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:41.044 11:43:07 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:41.044 11:43:07 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.044 11:43:07 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.044 11:43:07 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.044 11:43:07 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.044 11:43:07 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.044 11:43:07 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.044 11:43:07 json_config -- paths/export.sh@5 -- # export PATH 00:04:41.044 11:43:07 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.044 11:43:07 json_config -- nvmf/common.sh@51 -- # : 0 00:04:41.044 11:43:07 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:41.044 11:43:07 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:41.044 11:43:07 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.044 11:43:07 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.044 11:43:07 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.044 11:43:07 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:41.044 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:41.044 11:43:07 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:41.044 11:43:07 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:41.044 11:43:07 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:41.044 11:43:07 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:04:41.044 11:43:07 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:41.044 11:43:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:41.044 11:43:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:41.044 11:43:07 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:41.044 WARNING: No tests are enabled so not running JSON configuration tests 00:04:41.044 11:43:07 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:41.044 11:43:07 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:41.044 ************************************ 00:04:41.044 END TEST json_config 00:04:41.044 ************************************ 00:04:41.044 00:04:41.044 real 0m0.216s 00:04:41.044 user 0m0.127s 00:04:41.044 sys 0m0.098s 00:04:41.044 11:43:07 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.044 11:43:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.044 11:43:07 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:41.044 11:43:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.044 11:43:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.044 11:43:07 -- common/autotest_common.sh@10 -- # set +x 00:04:41.044 ************************************ 00:04:41.044 START TEST json_config_extra_key 00:04:41.044 ************************************ 00:04:41.044 11:43:07 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:41.305 11:43:07 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:41.305 11:43:07 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:04:41.305 11:43:07 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:41.305 11:43:07 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:41.305 11:43:07 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.305 11:43:07 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.305 11:43:07 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.305 11:43:07 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.305 11:43:07 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.305 11:43:07 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.305 11:43:07 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.305 11:43:07 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.305 11:43:07 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.305 11:43:07 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.305 11:43:07 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.305 11:43:07 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:41.305 11:43:07 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:41.305 11:43:07 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.305 11:43:07 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.305 11:43:07 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:41.305 11:43:07 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:41.305 11:43:07 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.305 11:43:07 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:41.305 11:43:07 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.305 11:43:07 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:41.305 11:43:07 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:41.305 11:43:07 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.305 11:43:07 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:41.305 11:43:07 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.305 11:43:07 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.305 11:43:07 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.305 11:43:07 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:41.305 11:43:07 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.305 11:43:07 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:41.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.305 --rc genhtml_branch_coverage=1 00:04:41.305 --rc genhtml_function_coverage=1 00:04:41.305 --rc genhtml_legend=1 00:04:41.305 --rc geninfo_all_blocks=1 00:04:41.305 --rc geninfo_unexecuted_blocks=1 00:04:41.305 00:04:41.305 ' 00:04:41.305 11:43:07 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:41.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.305 --rc genhtml_branch_coverage=1 00:04:41.305 --rc genhtml_function_coverage=1 00:04:41.305 --rc 
genhtml_legend=1 00:04:41.305 --rc geninfo_all_blocks=1 00:04:41.305 --rc geninfo_unexecuted_blocks=1 00:04:41.305 00:04:41.305 ' 00:04:41.305 11:43:07 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:41.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.305 --rc genhtml_branch_coverage=1 00:04:41.305 --rc genhtml_function_coverage=1 00:04:41.305 --rc genhtml_legend=1 00:04:41.305 --rc geninfo_all_blocks=1 00:04:41.305 --rc geninfo_unexecuted_blocks=1 00:04:41.305 00:04:41.305 ' 00:04:41.305 11:43:07 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:41.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.305 --rc genhtml_branch_coverage=1 00:04:41.305 --rc genhtml_function_coverage=1 00:04:41.305 --rc genhtml_legend=1 00:04:41.305 --rc geninfo_all_blocks=1 00:04:41.305 --rc geninfo_unexecuted_blocks=1 00:04:41.305 00:04:41.305 ' 00:04:41.306 11:43:07 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:41.306 11:43:07 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:41.306 11:43:07 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.306 11:43:07 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.306 11:43:07 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.306 11:43:07 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.306 11:43:07 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.306 11:43:07 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.306 11:43:07 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.306 11:43:07 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.306 11:43:07 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.306 11:43:07 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.306 11:43:07 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:205432e6-0d85-4ef4-92fc-cf1aa4632adc 00:04:41.306 11:43:07 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=205432e6-0d85-4ef4-92fc-cf1aa4632adc 00:04:41.306 11:43:07 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.306 11:43:07 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.306 11:43:07 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:41.306 11:43:07 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.306 11:43:07 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:41.306 11:43:07 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:41.306 11:43:07 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.306 11:43:07 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.306 11:43:07 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.306 11:43:07 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.306 11:43:07 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.306 11:43:07 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.306 11:43:07 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:41.306 11:43:07 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.306 11:43:07 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:41.306 11:43:07 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:41.306 11:43:07 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:41.306 11:43:07 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.306 11:43:07 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.306 11:43:07 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:41.306 11:43:07 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:41.306 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:41.306 11:43:07 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:41.306 11:43:07 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:41.306 11:43:07 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:41.306 11:43:07 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:41.306 11:43:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:41.306 11:43:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:41.306 11:43:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:41.306 11:43:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:41.306 11:43:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:41.306 11:43:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:41.306 11:43:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:41.306 11:43:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:41.306 11:43:07 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:41.306 INFO: launching applications... 00:04:41.306 11:43:07 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:04:41.306 11:43:07 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:41.306 11:43:07 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:41.306 11:43:07 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:41.306 11:43:07 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:41.306 11:43:07 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:41.306 11:43:07 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:41.306 11:43:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.306 11:43:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.306 11:43:07 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57653 00:04:41.306 Waiting for target to run... 00:04:41.306 11:43:07 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:41.306 11:43:07 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57653 /var/tmp/spdk_tgt.sock 00:04:41.306 11:43:07 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57653 ']' 00:04:41.306 11:43:07 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:41.306 11:43:07 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:41.306 11:43:07 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:41.306 11:43:07 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:41.306 11:43:07 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.306 11:43:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:41.567 [2024-11-27 11:43:07.733742] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:04:41.567 [2024-11-27 11:43:07.733904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57653 ] 00:04:41.827 [2024-11-27 11:43:08.114037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.087 [2024-11-27 11:43:08.234743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.658 11:43:08 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.658 11:43:08 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:42.658 00:04:42.658 11:43:08 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:42.658 INFO: shutting down applications... 00:04:42.658 11:43:08 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:42.658 11:43:08 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:42.658 11:43:08 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:42.658 11:43:08 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:42.658 11:43:08 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57653 ]] 00:04:42.658 11:43:08 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57653 00:04:42.658 11:43:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:42.658 11:43:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.658 11:43:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57653 00:04:42.658 11:43:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:43.226 11:43:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:43.226 11:43:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.226 11:43:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57653 00:04:43.226 11:43:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:43.795 11:43:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:43.795 11:43:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.795 11:43:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57653 00:04:43.795 11:43:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:44.364 11:43:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:44.364 11:43:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:44.364 11:43:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57653 00:04:44.364 11:43:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:44.624 11:43:10 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:04:44.624 11:43:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:44.624 11:43:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57653 00:04:44.624 11:43:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:45.193 11:43:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:45.193 11:43:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.193 11:43:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57653 00:04:45.193 11:43:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:45.763 11:43:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:45.763 11:43:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.763 11:43:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57653 00:04:45.763 11:43:12 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:45.763 11:43:12 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:45.763 11:43:12 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:45.763 SPDK target shutdown done 00:04:45.763 11:43:12 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:45.763 Success 00:04:45.763 11:43:12 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:45.763 00:04:45.763 real 0m4.609s 00:04:45.763 user 0m4.290s 00:04:45.763 sys 0m0.578s 00:04:45.763 11:43:12 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.763 11:43:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:45.763 ************************************ 00:04:45.763 END TEST json_config_extra_key 00:04:45.763 ************************************ 00:04:45.763 11:43:12 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:45.763 11:43:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.763 11:43:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.763 11:43:12 -- common/autotest_common.sh@10 -- # set +x 00:04:45.763 ************************************ 00:04:45.763 START TEST alias_rpc 00:04:45.763 ************************************ 00:04:45.763 11:43:12 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:46.067 * Looking for test storage... 00:04:46.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:46.067 11:43:12 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:46.067 11:43:12 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:46.067 11:43:12 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:46.067 11:43:12 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:46.067 11:43:12 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.067 11:43:12 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.067 11:43:12 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.067 11:43:12 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.067 11:43:12 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.067 11:43:12 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.067 11:43:12 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.067 11:43:12 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.067 11:43:12 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.067 11:43:12 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.067 11:43:12 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.067 11:43:12 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:46.067 11:43:12 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:04:46.067 11:43:12 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.067 11:43:12 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:46.067 11:43:12 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:46.067 11:43:12 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:46.067 11:43:12 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.067 11:43:12 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:46.067 11:43:12 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.067 11:43:12 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:46.067 11:43:12 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:46.067 11:43:12 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.067 11:43:12 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:46.067 11:43:12 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.067 11:43:12 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.067 11:43:12 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.067 11:43:12 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:46.067 11:43:12 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.067 11:43:12 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:46.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.067 --rc genhtml_branch_coverage=1 00:04:46.067 --rc genhtml_function_coverage=1 00:04:46.067 --rc genhtml_legend=1 00:04:46.067 --rc geninfo_all_blocks=1 00:04:46.067 --rc geninfo_unexecuted_blocks=1 00:04:46.067 00:04:46.067 ' 00:04:46.068 11:43:12 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:46.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.068 --rc genhtml_branch_coverage=1 00:04:46.068 --rc genhtml_function_coverage=1 00:04:46.068 --rc 
genhtml_legend=1 00:04:46.068 --rc geninfo_all_blocks=1 00:04:46.068 --rc geninfo_unexecuted_blocks=1 00:04:46.068 00:04:46.068 ' 00:04:46.068 11:43:12 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:46.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.068 --rc genhtml_branch_coverage=1 00:04:46.068 --rc genhtml_function_coverage=1 00:04:46.068 --rc genhtml_legend=1 00:04:46.068 --rc geninfo_all_blocks=1 00:04:46.068 --rc geninfo_unexecuted_blocks=1 00:04:46.068 00:04:46.068 ' 00:04:46.068 11:43:12 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:46.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.068 --rc genhtml_branch_coverage=1 00:04:46.068 --rc genhtml_function_coverage=1 00:04:46.068 --rc genhtml_legend=1 00:04:46.068 --rc geninfo_all_blocks=1 00:04:46.068 --rc geninfo_unexecuted_blocks=1 00:04:46.068 00:04:46.068 ' 00:04:46.068 11:43:12 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:46.068 11:43:12 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57765 00:04:46.068 11:43:12 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:46.068 11:43:12 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57765 00:04:46.068 11:43:12 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57765 ']' 00:04:46.068 11:43:12 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.068 11:43:12 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.068 11:43:12 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:46.068 11:43:12 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.068 11:43:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.068 [2024-11-27 11:43:12.408380] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:04:46.068 [2024-11-27 11:43:12.408521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57765 ] 00:04:46.349 [2024-11-27 11:43:12.591248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.349 [2024-11-27 11:43:12.727912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.729 11:43:13 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.729 11:43:13 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:47.729 11:43:13 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:47.729 11:43:13 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57765 00:04:47.729 11:43:13 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57765 ']' 00:04:47.729 11:43:13 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57765 00:04:47.729 11:43:13 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:47.729 11:43:13 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:47.729 11:43:13 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57765 00:04:47.729 11:43:14 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:47.729 11:43:14 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:47.729 killing process with pid 57765 00:04:47.729 11:43:14 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57765' 00:04:47.729 11:43:14 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57765 00:04:47.729 11:43:14 alias_rpc -- common/autotest_common.sh@978 -- # wait 57765 00:04:50.270 00:04:50.270 real 0m4.555s 00:04:50.270 user 0m4.354s 00:04:50.270 sys 0m0.750s 00:04:50.270 11:43:16 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.270 11:43:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.270 ************************************ 00:04:50.270 END TEST alias_rpc 00:04:50.270 ************************************ 00:04:50.530 11:43:16 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:50.530 11:43:16 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:50.530 11:43:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.530 11:43:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.530 11:43:16 -- common/autotest_common.sh@10 -- # set +x 00:04:50.530 ************************************ 00:04:50.530 START TEST spdkcli_tcp 00:04:50.530 ************************************ 00:04:50.530 11:43:16 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:50.530 * Looking for test storage... 
00:04:50.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:50.530 11:43:16 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:50.530 11:43:16 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:50.530 11:43:16 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:50.530 11:43:16 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:50.530 11:43:16 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.530 11:43:16 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.530 11:43:16 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.530 11:43:16 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.530 11:43:16 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.530 11:43:16 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.530 11:43:16 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.530 11:43:16 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.530 11:43:16 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.530 11:43:16 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.530 11:43:16 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.530 11:43:16 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:50.530 11:43:16 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:50.790 11:43:16 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.790 11:43:16 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.790 11:43:16 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:50.790 11:43:16 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:50.790 11:43:16 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.790 11:43:16 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:50.790 11:43:16 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.790 11:43:16 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:50.790 11:43:16 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:50.790 11:43:16 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.790 11:43:16 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:50.790 11:43:16 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.790 11:43:16 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.790 11:43:16 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.790 11:43:16 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:50.790 11:43:16 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.790 11:43:16 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:50.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.790 --rc genhtml_branch_coverage=1 00:04:50.790 --rc genhtml_function_coverage=1 00:04:50.790 --rc genhtml_legend=1 00:04:50.790 --rc geninfo_all_blocks=1 00:04:50.790 --rc geninfo_unexecuted_blocks=1 00:04:50.790 00:04:50.790 ' 00:04:50.790 11:43:16 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:50.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.790 --rc genhtml_branch_coverage=1 00:04:50.790 --rc genhtml_function_coverage=1 00:04:50.790 --rc genhtml_legend=1 00:04:50.790 --rc geninfo_all_blocks=1 00:04:50.790 --rc geninfo_unexecuted_blocks=1 00:04:50.790 00:04:50.790 ' 00:04:50.790 11:43:16 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:50.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.790 --rc genhtml_branch_coverage=1 00:04:50.790 --rc genhtml_function_coverage=1 00:04:50.790 --rc genhtml_legend=1 00:04:50.790 --rc geninfo_all_blocks=1 00:04:50.790 --rc geninfo_unexecuted_blocks=1 00:04:50.790 00:04:50.790 ' 00:04:50.790 11:43:16 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:50.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.790 --rc genhtml_branch_coverage=1 00:04:50.790 --rc genhtml_function_coverage=1 00:04:50.790 --rc genhtml_legend=1 00:04:50.790 --rc geninfo_all_blocks=1 00:04:50.790 --rc geninfo_unexecuted_blocks=1 00:04:50.790 00:04:50.790 ' 00:04:50.790 11:43:16 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:50.790 11:43:16 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:50.790 11:43:16 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:50.790 11:43:16 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:50.790 11:43:16 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:50.790 11:43:16 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:50.790 11:43:16 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:50.790 11:43:16 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:50.790 11:43:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:50.790 11:43:16 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57877 00:04:50.790 11:43:16 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:50.790 11:43:16 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57877 00:04:50.790 11:43:16 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57877 ']' 00:04:50.790 11:43:16 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.790 11:43:16 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.790 11:43:16 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.790 11:43:16 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.790 11:43:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:50.790 [2024-11-27 11:43:17.038900] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:04:50.790 [2024-11-27 11:43:17.039042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57877 ] 00:04:51.050 [2024-11-27 11:43:17.214896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:51.050 [2024-11-27 11:43:17.350141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.050 [2024-11-27 11:43:17.350174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.988 11:43:18 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.988 11:43:18 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:51.988 11:43:18 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57894 00:04:51.988 11:43:18 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:51.988 11:43:18 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:52.249 [ 00:04:52.249 "bdev_malloc_delete", 
00:04:52.249 "bdev_malloc_create", 00:04:52.249 "bdev_null_resize", 00:04:52.249 "bdev_null_delete", 00:04:52.249 "bdev_null_create", 00:04:52.249 "bdev_nvme_cuse_unregister", 00:04:52.249 "bdev_nvme_cuse_register", 00:04:52.249 "bdev_opal_new_user", 00:04:52.249 "bdev_opal_set_lock_state", 00:04:52.249 "bdev_opal_delete", 00:04:52.249 "bdev_opal_get_info", 00:04:52.249 "bdev_opal_create", 00:04:52.249 "bdev_nvme_opal_revert", 00:04:52.249 "bdev_nvme_opal_init", 00:04:52.249 "bdev_nvme_send_cmd", 00:04:52.249 "bdev_nvme_set_keys", 00:04:52.249 "bdev_nvme_get_path_iostat", 00:04:52.249 "bdev_nvme_get_mdns_discovery_info", 00:04:52.249 "bdev_nvme_stop_mdns_discovery", 00:04:52.249 "bdev_nvme_start_mdns_discovery", 00:04:52.249 "bdev_nvme_set_multipath_policy", 00:04:52.249 "bdev_nvme_set_preferred_path", 00:04:52.249 "bdev_nvme_get_io_paths", 00:04:52.249 "bdev_nvme_remove_error_injection", 00:04:52.249 "bdev_nvme_add_error_injection", 00:04:52.249 "bdev_nvme_get_discovery_info", 00:04:52.249 "bdev_nvme_stop_discovery", 00:04:52.249 "bdev_nvme_start_discovery", 00:04:52.249 "bdev_nvme_get_controller_health_info", 00:04:52.249 "bdev_nvme_disable_controller", 00:04:52.249 "bdev_nvme_enable_controller", 00:04:52.249 "bdev_nvme_reset_controller", 00:04:52.249 "bdev_nvme_get_transport_statistics", 00:04:52.249 "bdev_nvme_apply_firmware", 00:04:52.249 "bdev_nvme_detach_controller", 00:04:52.249 "bdev_nvme_get_controllers", 00:04:52.249 "bdev_nvme_attach_controller", 00:04:52.249 "bdev_nvme_set_hotplug", 00:04:52.249 "bdev_nvme_set_options", 00:04:52.249 "bdev_passthru_delete", 00:04:52.249 "bdev_passthru_create", 00:04:52.249 "bdev_lvol_set_parent_bdev", 00:04:52.249 "bdev_lvol_set_parent", 00:04:52.249 "bdev_lvol_check_shallow_copy", 00:04:52.249 "bdev_lvol_start_shallow_copy", 00:04:52.249 "bdev_lvol_grow_lvstore", 00:04:52.249 "bdev_lvol_get_lvols", 00:04:52.249 "bdev_lvol_get_lvstores", 00:04:52.249 "bdev_lvol_delete", 00:04:52.249 "bdev_lvol_set_read_only", 
00:04:52.249 "bdev_lvol_resize", 00:04:52.249 "bdev_lvol_decouple_parent", 00:04:52.249 "bdev_lvol_inflate", 00:04:52.249 "bdev_lvol_rename", 00:04:52.249 "bdev_lvol_clone_bdev", 00:04:52.249 "bdev_lvol_clone", 00:04:52.249 "bdev_lvol_snapshot", 00:04:52.249 "bdev_lvol_create", 00:04:52.249 "bdev_lvol_delete_lvstore", 00:04:52.249 "bdev_lvol_rename_lvstore", 00:04:52.249 "bdev_lvol_create_lvstore", 00:04:52.249 "bdev_raid_set_options", 00:04:52.249 "bdev_raid_remove_base_bdev", 00:04:52.249 "bdev_raid_add_base_bdev", 00:04:52.249 "bdev_raid_delete", 00:04:52.249 "bdev_raid_create", 00:04:52.249 "bdev_raid_get_bdevs", 00:04:52.249 "bdev_error_inject_error", 00:04:52.249 "bdev_error_delete", 00:04:52.249 "bdev_error_create", 00:04:52.249 "bdev_split_delete", 00:04:52.249 "bdev_split_create", 00:04:52.249 "bdev_delay_delete", 00:04:52.249 "bdev_delay_create", 00:04:52.249 "bdev_delay_update_latency", 00:04:52.249 "bdev_zone_block_delete", 00:04:52.249 "bdev_zone_block_create", 00:04:52.249 "blobfs_create", 00:04:52.249 "blobfs_detect", 00:04:52.249 "blobfs_set_cache_size", 00:04:52.249 "bdev_aio_delete", 00:04:52.249 "bdev_aio_rescan", 00:04:52.249 "bdev_aio_create", 00:04:52.249 "bdev_ftl_set_property", 00:04:52.249 "bdev_ftl_get_properties", 00:04:52.249 "bdev_ftl_get_stats", 00:04:52.249 "bdev_ftl_unmap", 00:04:52.249 "bdev_ftl_unload", 00:04:52.249 "bdev_ftl_delete", 00:04:52.249 "bdev_ftl_load", 00:04:52.249 "bdev_ftl_create", 00:04:52.249 "bdev_virtio_attach_controller", 00:04:52.249 "bdev_virtio_scsi_get_devices", 00:04:52.249 "bdev_virtio_detach_controller", 00:04:52.249 "bdev_virtio_blk_set_hotplug", 00:04:52.249 "bdev_iscsi_delete", 00:04:52.249 "bdev_iscsi_create", 00:04:52.249 "bdev_iscsi_set_options", 00:04:52.249 "accel_error_inject_error", 00:04:52.249 "ioat_scan_accel_module", 00:04:52.249 "dsa_scan_accel_module", 00:04:52.249 "iaa_scan_accel_module", 00:04:52.249 "keyring_file_remove_key", 00:04:52.249 "keyring_file_add_key", 00:04:52.249 
"keyring_linux_set_options", 00:04:52.249 "fsdev_aio_delete", 00:04:52.249 "fsdev_aio_create", 00:04:52.249 "iscsi_get_histogram", 00:04:52.249 "iscsi_enable_histogram", 00:04:52.249 "iscsi_set_options", 00:04:52.249 "iscsi_get_auth_groups", 00:04:52.249 "iscsi_auth_group_remove_secret", 00:04:52.249 "iscsi_auth_group_add_secret", 00:04:52.249 "iscsi_delete_auth_group", 00:04:52.249 "iscsi_create_auth_group", 00:04:52.249 "iscsi_set_discovery_auth", 00:04:52.249 "iscsi_get_options", 00:04:52.249 "iscsi_target_node_request_logout", 00:04:52.249 "iscsi_target_node_set_redirect", 00:04:52.249 "iscsi_target_node_set_auth", 00:04:52.249 "iscsi_target_node_add_lun", 00:04:52.249 "iscsi_get_stats", 00:04:52.249 "iscsi_get_connections", 00:04:52.249 "iscsi_portal_group_set_auth", 00:04:52.249 "iscsi_start_portal_group", 00:04:52.249 "iscsi_delete_portal_group", 00:04:52.249 "iscsi_create_portal_group", 00:04:52.249 "iscsi_get_portal_groups", 00:04:52.249 "iscsi_delete_target_node", 00:04:52.250 "iscsi_target_node_remove_pg_ig_maps", 00:04:52.250 "iscsi_target_node_add_pg_ig_maps", 00:04:52.250 "iscsi_create_target_node", 00:04:52.250 "iscsi_get_target_nodes", 00:04:52.250 "iscsi_delete_initiator_group", 00:04:52.250 "iscsi_initiator_group_remove_initiators", 00:04:52.250 "iscsi_initiator_group_add_initiators", 00:04:52.250 "iscsi_create_initiator_group", 00:04:52.250 "iscsi_get_initiator_groups", 00:04:52.250 "nvmf_set_crdt", 00:04:52.250 "nvmf_set_config", 00:04:52.250 "nvmf_set_max_subsystems", 00:04:52.250 "nvmf_stop_mdns_prr", 00:04:52.250 "nvmf_publish_mdns_prr", 00:04:52.250 "nvmf_subsystem_get_listeners", 00:04:52.250 "nvmf_subsystem_get_qpairs", 00:04:52.250 "nvmf_subsystem_get_controllers", 00:04:52.250 "nvmf_get_stats", 00:04:52.250 "nvmf_get_transports", 00:04:52.250 "nvmf_create_transport", 00:04:52.250 "nvmf_get_targets", 00:04:52.250 "nvmf_delete_target", 00:04:52.250 "nvmf_create_target", 00:04:52.250 "nvmf_subsystem_allow_any_host", 00:04:52.250 
"nvmf_subsystem_set_keys", 00:04:52.250 "nvmf_subsystem_remove_host", 00:04:52.250 "nvmf_subsystem_add_host", 00:04:52.250 "nvmf_ns_remove_host", 00:04:52.250 "nvmf_ns_add_host", 00:04:52.250 "nvmf_subsystem_remove_ns", 00:04:52.250 "nvmf_subsystem_set_ns_ana_group", 00:04:52.250 "nvmf_subsystem_add_ns", 00:04:52.250 "nvmf_subsystem_listener_set_ana_state", 00:04:52.250 "nvmf_discovery_get_referrals", 00:04:52.250 "nvmf_discovery_remove_referral", 00:04:52.250 "nvmf_discovery_add_referral", 00:04:52.250 "nvmf_subsystem_remove_listener", 00:04:52.250 "nvmf_subsystem_add_listener", 00:04:52.250 "nvmf_delete_subsystem", 00:04:52.250 "nvmf_create_subsystem", 00:04:52.250 "nvmf_get_subsystems", 00:04:52.250 "env_dpdk_get_mem_stats", 00:04:52.250 "nbd_get_disks", 00:04:52.250 "nbd_stop_disk", 00:04:52.250 "nbd_start_disk", 00:04:52.250 "ublk_recover_disk", 00:04:52.250 "ublk_get_disks", 00:04:52.250 "ublk_stop_disk", 00:04:52.250 "ublk_start_disk", 00:04:52.250 "ublk_destroy_target", 00:04:52.250 "ublk_create_target", 00:04:52.250 "virtio_blk_create_transport", 00:04:52.250 "virtio_blk_get_transports", 00:04:52.250 "vhost_controller_set_coalescing", 00:04:52.250 "vhost_get_controllers", 00:04:52.250 "vhost_delete_controller", 00:04:52.250 "vhost_create_blk_controller", 00:04:52.250 "vhost_scsi_controller_remove_target", 00:04:52.250 "vhost_scsi_controller_add_target", 00:04:52.250 "vhost_start_scsi_controller", 00:04:52.250 "vhost_create_scsi_controller", 00:04:52.250 "thread_set_cpumask", 00:04:52.250 "scheduler_set_options", 00:04:52.250 "framework_get_governor", 00:04:52.250 "framework_get_scheduler", 00:04:52.250 "framework_set_scheduler", 00:04:52.250 "framework_get_reactors", 00:04:52.250 "thread_get_io_channels", 00:04:52.250 "thread_get_pollers", 00:04:52.250 "thread_get_stats", 00:04:52.250 "framework_monitor_context_switch", 00:04:52.250 "spdk_kill_instance", 00:04:52.250 "log_enable_timestamps", 00:04:52.250 "log_get_flags", 00:04:52.250 "log_clear_flag", 
00:04:52.250 "log_set_flag", 00:04:52.250 "log_get_level", 00:04:52.250 "log_set_level", 00:04:52.250 "log_get_print_level", 00:04:52.250 "log_set_print_level", 00:04:52.250 "framework_enable_cpumask_locks", 00:04:52.250 "framework_disable_cpumask_locks", 00:04:52.250 "framework_wait_init", 00:04:52.250 "framework_start_init", 00:04:52.250 "scsi_get_devices", 00:04:52.250 "bdev_get_histogram", 00:04:52.250 "bdev_enable_histogram", 00:04:52.250 "bdev_set_qos_limit", 00:04:52.250 "bdev_set_qd_sampling_period", 00:04:52.250 "bdev_get_bdevs", 00:04:52.250 "bdev_reset_iostat", 00:04:52.250 "bdev_get_iostat", 00:04:52.250 "bdev_examine", 00:04:52.250 "bdev_wait_for_examine", 00:04:52.250 "bdev_set_options", 00:04:52.250 "accel_get_stats", 00:04:52.250 "accel_set_options", 00:04:52.250 "accel_set_driver", 00:04:52.250 "accel_crypto_key_destroy", 00:04:52.250 "accel_crypto_keys_get", 00:04:52.250 "accel_crypto_key_create", 00:04:52.250 "accel_assign_opc", 00:04:52.250 "accel_get_module_info", 00:04:52.250 "accel_get_opc_assignments", 00:04:52.250 "vmd_rescan", 00:04:52.250 "vmd_remove_device", 00:04:52.250 "vmd_enable", 00:04:52.250 "sock_get_default_impl", 00:04:52.250 "sock_set_default_impl", 00:04:52.250 "sock_impl_set_options", 00:04:52.250 "sock_impl_get_options", 00:04:52.250 "iobuf_get_stats", 00:04:52.250 "iobuf_set_options", 00:04:52.250 "keyring_get_keys", 00:04:52.250 "framework_get_pci_devices", 00:04:52.250 "framework_get_config", 00:04:52.250 "framework_get_subsystems", 00:04:52.250 "fsdev_set_opts", 00:04:52.250 "fsdev_get_opts", 00:04:52.250 "trace_get_info", 00:04:52.250 "trace_get_tpoint_group_mask", 00:04:52.250 "trace_disable_tpoint_group", 00:04:52.250 "trace_enable_tpoint_group", 00:04:52.250 "trace_clear_tpoint_mask", 00:04:52.250 "trace_set_tpoint_mask", 00:04:52.250 "notify_get_notifications", 00:04:52.250 "notify_get_types", 00:04:52.250 "spdk_get_version", 00:04:52.250 "rpc_get_methods" 00:04:52.250 ] 00:04:52.250 11:43:18 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:52.250 11:43:18 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:52.250 11:43:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:52.250 11:43:18 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:52.250 11:43:18 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57877 00:04:52.250 11:43:18 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57877 ']' 00:04:52.250 11:43:18 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57877 00:04:52.250 11:43:18 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:52.250 11:43:18 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.250 11:43:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57877 00:04:52.250 11:43:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.250 11:43:18 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.250 killing process with pid 57877 00:04:52.250 11:43:18 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57877' 00:04:52.250 11:43:18 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57877 00:04:52.250 11:43:18 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57877 00:04:54.790 00:04:54.790 real 0m4.446s 00:04:54.790 user 0m7.768s 00:04:54.790 sys 0m0.775s 00:04:54.790 11:43:21 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.790 11:43:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.790 ************************************ 00:04:54.790 END TEST spdkcli_tcp 00:04:54.790 ************************************ 00:04:55.049 11:43:21 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:55.049 11:43:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.049 11:43:21 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.049 11:43:21 -- common/autotest_common.sh@10 -- # set +x 00:04:55.049 ************************************ 00:04:55.049 START TEST dpdk_mem_utility 00:04:55.049 ************************************ 00:04:55.049 11:43:21 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:55.049 * Looking for test storage... 00:04:55.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:55.049 11:43:21 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:55.049 11:43:21 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:55.049 11:43:21 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:55.049 11:43:21 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:55.049 11:43:21 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.049 11:43:21 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.049 11:43:21 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.049 11:43:21 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.049 11:43:21 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.049 11:43:21 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.049 11:43:21 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.049 11:43:21 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.049 11:43:21 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.049 11:43:21 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.049 11:43:21 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.049 11:43:21 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:55.049 11:43:21 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:55.049 
11:43:21 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.049 11:43:21 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:55.049 11:43:21 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:55.049 11:43:21 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:55.049 11:43:21 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.049 11:43:21 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:55.049 11:43:21 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.049 11:43:21 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:55.309 11:43:21 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:55.309 11:43:21 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.309 11:43:21 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:55.309 11:43:21 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.309 11:43:21 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.309 11:43:21 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.309 11:43:21 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:55.309 11:43:21 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.309 11:43:21 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:55.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.309 --rc genhtml_branch_coverage=1 00:04:55.309 --rc genhtml_function_coverage=1 00:04:55.309 --rc genhtml_legend=1 00:04:55.309 --rc geninfo_all_blocks=1 00:04:55.309 --rc geninfo_unexecuted_blocks=1 00:04:55.309 00:04:55.309 ' 00:04:55.309 11:43:21 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:55.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.309 --rc 
genhtml_branch_coverage=1 00:04:55.309 --rc genhtml_function_coverage=1 00:04:55.309 --rc genhtml_legend=1 00:04:55.309 --rc geninfo_all_blocks=1 00:04:55.309 --rc geninfo_unexecuted_blocks=1 00:04:55.309 00:04:55.309 ' 00:04:55.309 11:43:21 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:55.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.309 --rc genhtml_branch_coverage=1 00:04:55.309 --rc genhtml_function_coverage=1 00:04:55.309 --rc genhtml_legend=1 00:04:55.309 --rc geninfo_all_blocks=1 00:04:55.309 --rc geninfo_unexecuted_blocks=1 00:04:55.309 00:04:55.309 ' 00:04:55.309 11:43:21 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:55.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.309 --rc genhtml_branch_coverage=1 00:04:55.309 --rc genhtml_function_coverage=1 00:04:55.309 --rc genhtml_legend=1 00:04:55.309 --rc geninfo_all_blocks=1 00:04:55.309 --rc geninfo_unexecuted_blocks=1 00:04:55.309 00:04:55.309 ' 00:04:55.309 11:43:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:55.309 11:43:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58004 00:04:55.309 11:43:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.309 11:43:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58004 00:04:55.309 11:43:21 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58004 ']' 00:04:55.309 11:43:21 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.309 11:43:21 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:55.309 11:43:21 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.309 11:43:21 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.309 11:43:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:55.309 [2024-11-27 11:43:21.538122] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:04:55.309 [2024-11-27 11:43:21.538268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58004 ] 00:04:55.569 [2024-11-27 11:43:21.710754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.569 [2024-11-27 11:43:21.844327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.509 11:43:22 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.509 11:43:22 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:56.509 11:43:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:56.509 11:43:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:56.509 11:43:22 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.509 11:43:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:56.509 { 00:04:56.509 "filename": "/tmp/spdk_mem_dump.txt" 00:04:56.509 } 00:04:56.509 11:43:22 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.509 11:43:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:56.509 DPDK memory size 824.000000 MiB in 1 heap(s) 00:04:56.509 1 heaps 
totaling size 824.000000 MiB 00:04:56.509 size: 824.000000 MiB heap id: 0 00:04:56.509 end heaps---------- 00:04:56.509 9 mempools totaling size 603.782043 MiB 00:04:56.509 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:56.509 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:56.509 size: 100.555481 MiB name: bdev_io_58004 00:04:56.509 size: 50.003479 MiB name: msgpool_58004 00:04:56.509 size: 36.509338 MiB name: fsdev_io_58004 00:04:56.509 size: 21.763794 MiB name: PDU_Pool 00:04:56.509 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:56.509 size: 4.133484 MiB name: evtpool_58004 00:04:56.509 size: 0.026123 MiB name: Session_Pool 00:04:56.509 end mempools------- 00:04:56.509 6 memzones totaling size 4.142822 MiB 00:04:56.509 size: 1.000366 MiB name: RG_ring_0_58004 00:04:56.509 size: 1.000366 MiB name: RG_ring_1_58004 00:04:56.509 size: 1.000366 MiB name: RG_ring_4_58004 00:04:56.509 size: 1.000366 MiB name: RG_ring_5_58004 00:04:56.509 size: 0.125366 MiB name: RG_ring_2_58004 00:04:56.509 size: 0.015991 MiB name: RG_ring_3_58004 00:04:56.509 end memzones------- 00:04:56.770 11:43:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:56.770 heap id: 0 total size: 824.000000 MiB number of busy elements: 320 number of free elements: 18 00:04:56.770 list of free elements. 
size: 16.780151 MiB 00:04:56.770 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:56.770 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:56.770 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:56.770 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:56.770 element at address: 0x200019900040 with size: 0.999939 MiB 00:04:56.770 element at address: 0x200019a00000 with size: 0.999084 MiB 00:04:56.770 element at address: 0x200032600000 with size: 0.994324 MiB 00:04:56.770 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:56.770 element at address: 0x200019200000 with size: 0.959656 MiB 00:04:56.770 element at address: 0x200019d00040 with size: 0.936401 MiB 00:04:56.770 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:56.770 element at address: 0x20001b400000 with size: 0.561707 MiB 00:04:56.770 element at address: 0x200000c00000 with size: 0.489197 MiB 00:04:56.770 element at address: 0x200019600000 with size: 0.487976 MiB 00:04:56.770 element at address: 0x200019e00000 with size: 0.485413 MiB 00:04:56.770 element at address: 0x200012c00000 with size: 0.433228 MiB 00:04:56.770 element at address: 0x200028800000 with size: 0.390442 MiB 00:04:56.770 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:56.770 list of standard malloc elements. 
size: 199.288940 MiB 00:04:56.770 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:56.770 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:56.770 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:56.770 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:56.770 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:04:56.770 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:56.770 element at address: 0x200019deff40 with size: 0.062683 MiB 00:04:56.770 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:56.770 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:56.770 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:04:56.770 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:56.771 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:56.771 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:04:56.771 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:04:56.771 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:56.771 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200012bff980 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:04:56.771 
element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200019affc40 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:04:56.771 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b490cc0 with size: 0.000244 
MiB 00:04:56.772 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4928c0 
with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:04:56.772 element at 
address: 0x20001b4944c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:04:56.772 element at address: 0x200028863f40 with size: 0.000244 MiB 00:04:56.772 element at address: 0x200028864040 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886af80 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886b080 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886b180 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886b280 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886b380 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886b480 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886b580 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886b680 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886b780 with size: 0.000244 MiB 
00:04:56.772 element at address: 0x20002886b880 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886b980 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886be80 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886c080 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886c180 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886c280 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886c380 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886c480 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886c580 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886c680 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886c780 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886c880 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886c980 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886d080 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886d180 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886d280 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886d380 with 
size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886d480 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886d580 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886d680 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886d780 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886d880 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886d980 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886da80 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886db80 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886de80 with size: 0.000244 MiB 00:04:56.772 element at address: 0x20002886df80 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886e080 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886e180 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886e280 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886e380 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886e480 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886e580 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886e680 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886e780 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886e880 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886e980 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:04:56.773 element at address: 
0x20002886ef80 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886f080 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886f180 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886f280 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886f380 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886f480 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886f580 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886f680 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886f780 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886f880 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886f980 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:04:56.773 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:04:56.773 list of memzone associated elements. 
size: 607.930908 MiB 00:04:56.773 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:04:56.773 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:56.773 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:04:56.773 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:56.773 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:04:56.773 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58004_0 00:04:56.773 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:56.773 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58004_0 00:04:56.773 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:56.773 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58004_0 00:04:56.773 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:04:56.773 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:56.773 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:04:56.773 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:56.773 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:56.773 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58004_0 00:04:56.773 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:56.773 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58004 00:04:56.773 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:56.773 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58004 00:04:56.773 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:04:56.773 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:56.773 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:04:56.773 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:56.773 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:56.773 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:56.773 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:04:56.773 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:56.773 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:56.773 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58004 00:04:56.773 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:56.773 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58004 00:04:56.773 element at address: 0x200019affd40 with size: 1.000549 MiB 00:04:56.773 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58004 00:04:56.773 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:04:56.773 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58004 00:04:56.773 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:56.773 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58004 00:04:56.773 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:56.773 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58004 00:04:56.773 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:04:56.773 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:56.773 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:04:56.773 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:56.773 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:04:56.773 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:56.773 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:56.773 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58004 00:04:56.773 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:56.773 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58004 00:04:56.773 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:04:56.773 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:56.773 element at address: 0x200028864140 with size: 0.023804 MiB 00:04:56.773 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:56.773 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:56.773 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58004 00:04:56.773 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:04:56.773 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:56.773 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:56.773 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58004 00:04:56.773 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:56.773 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58004 00:04:56.773 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:56.773 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58004 00:04:56.773 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:04:56.773 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:56.773 11:43:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:56.773 11:43:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58004 00:04:56.773 11:43:22 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58004 ']' 00:04:56.773 11:43:22 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58004 00:04:56.773 11:43:22 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:56.773 11:43:22 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.773 11:43:22 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58004 00:04:56.773 11:43:23 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.773 11:43:23 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.773 killing process with pid 58004 00:04:56.773 11:43:23 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58004' 00:04:56.773 11:43:23 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58004 00:04:56.773 11:43:23 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58004 00:04:59.377 00:04:59.377 real 0m4.348s 00:04:59.377 user 0m4.077s 00:04:59.377 sys 0m0.748s 00:04:59.377 11:43:25 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.377 11:43:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:59.377 ************************************ 00:04:59.377 END TEST dpdk_mem_utility 00:04:59.377 ************************************ 00:04:59.377 11:43:25 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:59.377 11:43:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.377 11:43:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.377 11:43:25 -- common/autotest_common.sh@10 -- # set +x 00:04:59.377 ************************************ 00:04:59.377 START TEST event 00:04:59.377 ************************************ 00:04:59.377 11:43:25 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:59.377 * Looking for test storage... 
00:04:59.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:59.377 11:43:25 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:59.377 11:43:25 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:59.377 11:43:25 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:59.637 11:43:25 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:59.637 11:43:25 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.637 11:43:25 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.637 11:43:25 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.637 11:43:25 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.637 11:43:25 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.637 11:43:25 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.637 11:43:25 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.637 11:43:25 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.637 11:43:25 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.637 11:43:25 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.638 11:43:25 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.638 11:43:25 event -- scripts/common.sh@344 -- # case "$op" in 00:04:59.638 11:43:25 event -- scripts/common.sh@345 -- # : 1 00:04:59.638 11:43:25 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.638 11:43:25 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.638 11:43:25 event -- scripts/common.sh@365 -- # decimal 1 00:04:59.638 11:43:25 event -- scripts/common.sh@353 -- # local d=1 00:04:59.638 11:43:25 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.638 11:43:25 event -- scripts/common.sh@355 -- # echo 1 00:04:59.638 11:43:25 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.638 11:43:25 event -- scripts/common.sh@366 -- # decimal 2 00:04:59.638 11:43:25 event -- scripts/common.sh@353 -- # local d=2 00:04:59.638 11:43:25 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.638 11:43:25 event -- scripts/common.sh@355 -- # echo 2 00:04:59.638 11:43:25 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.638 11:43:25 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.638 11:43:25 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.638 11:43:25 event -- scripts/common.sh@368 -- # return 0 00:04:59.638 11:43:25 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.638 11:43:25 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:59.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.638 --rc genhtml_branch_coverage=1 00:04:59.638 --rc genhtml_function_coverage=1 00:04:59.638 --rc genhtml_legend=1 00:04:59.638 --rc geninfo_all_blocks=1 00:04:59.638 --rc geninfo_unexecuted_blocks=1 00:04:59.638 00:04:59.638 ' 00:04:59.638 11:43:25 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:59.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.638 --rc genhtml_branch_coverage=1 00:04:59.638 --rc genhtml_function_coverage=1 00:04:59.638 --rc genhtml_legend=1 00:04:59.638 --rc geninfo_all_blocks=1 00:04:59.638 --rc geninfo_unexecuted_blocks=1 00:04:59.638 00:04:59.638 ' 00:04:59.638 11:43:25 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:59.638 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:59.638 --rc genhtml_branch_coverage=1 00:04:59.638 --rc genhtml_function_coverage=1 00:04:59.638 --rc genhtml_legend=1 00:04:59.638 --rc geninfo_all_blocks=1 00:04:59.638 --rc geninfo_unexecuted_blocks=1 00:04:59.638 00:04:59.638 ' 00:04:59.638 11:43:25 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:59.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.638 --rc genhtml_branch_coverage=1 00:04:59.638 --rc genhtml_function_coverage=1 00:04:59.638 --rc genhtml_legend=1 00:04:59.638 --rc geninfo_all_blocks=1 00:04:59.638 --rc geninfo_unexecuted_blocks=1 00:04:59.638 00:04:59.638 ' 00:04:59.638 11:43:25 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:59.638 11:43:25 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:59.638 11:43:25 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:59.638 11:43:25 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:59.638 11:43:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.638 11:43:25 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.638 ************************************ 00:04:59.638 START TEST event_perf 00:04:59.638 ************************************ 00:04:59.638 11:43:25 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:59.638 Running I/O for 1 seconds...[2024-11-27 11:43:25.904359] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:04:59.638 [2024-11-27 11:43:25.904474] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58113 ] 00:04:59.896 [2024-11-27 11:43:26.081435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:59.896 Running I/O for 1 seconds...[2024-11-27 11:43:26.225197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.896 [2024-11-27 11:43:26.225336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.896 [2024-11-27 11:43:26.225497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.896 [2024-11-27 11:43:26.225530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:01.276 00:05:01.276 lcore 0: 101683 00:05:01.276 lcore 1: 101684 00:05:01.276 lcore 2: 101681 00:05:01.276 lcore 3: 101684 00:05:01.276 done. 
00:05:01.276 00:05:01.276 real 0m1.625s 00:05:01.276 user 0m4.366s 00:05:01.276 sys 0m0.136s 00:05:01.276 11:43:27 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.276 11:43:27 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:01.276 ************************************ 00:05:01.276 END TEST event_perf 00:05:01.276 ************************************ 00:05:01.276 11:43:27 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:01.276 11:43:27 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:01.276 11:43:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.276 11:43:27 event -- common/autotest_common.sh@10 -- # set +x 00:05:01.276 ************************************ 00:05:01.276 START TEST event_reactor 00:05:01.276 ************************************ 00:05:01.276 11:43:27 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:01.276 [2024-11-27 11:43:27.595223] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:05:01.276 [2024-11-27 11:43:27.595353] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58152 ] 00:05:01.535 [2024-11-27 11:43:27.770126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.535 [2024-11-27 11:43:27.911694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.915 test_start 00:05:02.915 oneshot 00:05:02.915 tick 100 00:05:02.915 tick 100 00:05:02.915 tick 250 00:05:02.915 tick 100 00:05:02.915 tick 100 00:05:02.915 tick 100 00:05:02.915 tick 250 00:05:02.915 tick 500 00:05:02.915 tick 100 00:05:02.915 tick 100 00:05:02.915 tick 250 00:05:02.915 tick 100 00:05:02.915 tick 100 00:05:02.915 test_end 00:05:02.915 00:05:02.915 real 0m1.602s 00:05:02.915 user 0m1.374s 00:05:02.915 sys 0m0.120s 00:05:02.915 ************************************ 00:05:02.915 END TEST event_reactor 00:05:02.915 ************************************ 00:05:02.915 11:43:29 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.915 11:43:29 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:02.915 11:43:29 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:02.915 11:43:29 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:02.915 11:43:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.915 11:43:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.915 ************************************ 00:05:02.915 START TEST event_reactor_perf 00:05:02.915 ************************************ 00:05:02.915 11:43:29 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:02.915 [2024-11-27 
11:43:29.266032] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:05:02.915 [2024-11-27 11:43:29.266137] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58189 ] 00:05:03.174 [2024-11-27 11:43:29.438187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.433 [2024-11-27 11:43:29.568716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.813 test_start 00:05:04.813 test_end 00:05:04.813 Performance: 402438 events per second 00:05:04.813 00:05:04.813 real 0m1.590s 00:05:04.813 user 0m1.375s 00:05:04.813 sys 0m0.106s 00:05:04.813 11:43:30 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.813 11:43:30 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:04.813 ************************************ 00:05:04.813 END TEST event_reactor_perf 00:05:04.813 ************************************ 00:05:04.813 11:43:30 event -- event/event.sh@49 -- # uname -s 00:05:04.813 11:43:30 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:04.813 11:43:30 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:04.813 11:43:30 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.813 11:43:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.813 11:43:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.813 ************************************ 00:05:04.813 START TEST event_scheduler 00:05:04.813 ************************************ 00:05:04.813 11:43:30 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:04.813 * Looking for test storage... 
00:05:04.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:04.813 11:43:31 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:04.813 11:43:31 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:04.813 11:43:31 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:04.813 11:43:31 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:04.813 11:43:31 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.813 11:43:31 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.813 11:43:31 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.813 11:43:31 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.813 11:43:31 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.813 11:43:31 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.813 11:43:31 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.813 11:43:31 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.813 11:43:31 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.813 11:43:31 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.813 11:43:31 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.813 11:43:31 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:04.813 11:43:31 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:04.813 11:43:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.813 11:43:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.813 11:43:31 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:04.813 11:43:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:04.813 11:43:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.813 11:43:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:04.813 11:43:31 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.813 11:43:31 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:04.813 11:43:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:04.813 11:43:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.813 11:43:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:04.813 11:43:31 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.813 11:43:31 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.813 11:43:31 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.813 11:43:31 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:04.813 11:43:31 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.813 11:43:31 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:04.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.813 --rc genhtml_branch_coverage=1 00:05:04.813 --rc genhtml_function_coverage=1 00:05:04.813 --rc genhtml_legend=1 00:05:04.813 --rc geninfo_all_blocks=1 00:05:04.813 --rc geninfo_unexecuted_blocks=1 00:05:04.813 00:05:04.813 ' 00:05:04.813 11:43:31 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:04.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.813 --rc genhtml_branch_coverage=1 00:05:04.813 --rc genhtml_function_coverage=1 00:05:04.813 --rc 
genhtml_legend=1 00:05:04.813 --rc geninfo_all_blocks=1 00:05:04.813 --rc geninfo_unexecuted_blocks=1 00:05:04.813 00:05:04.813 ' 00:05:04.813 11:43:31 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:04.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.813 --rc genhtml_branch_coverage=1 00:05:04.813 --rc genhtml_function_coverage=1 00:05:04.813 --rc genhtml_legend=1 00:05:04.813 --rc geninfo_all_blocks=1 00:05:04.813 --rc geninfo_unexecuted_blocks=1 00:05:04.813 00:05:04.813 ' 00:05:04.813 11:43:31 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:04.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.813 --rc genhtml_branch_coverage=1 00:05:04.813 --rc genhtml_function_coverage=1 00:05:04.813 --rc genhtml_legend=1 00:05:04.813 --rc geninfo_all_blocks=1 00:05:04.813 --rc geninfo_unexecuted_blocks=1 00:05:04.813 00:05:04.813 ' 00:05:04.813 11:43:31 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:04.813 11:43:31 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58265 00:05:04.813 11:43:31 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:04.813 11:43:31 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.813 11:43:31 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58265 00:05:04.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:04.813 11:43:31 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58265 ']' 00:05:04.813 11:43:31 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.813 11:43:31 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.813 11:43:31 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.813 11:43:31 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.813 11:43:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:05.073 [2024-11-27 11:43:31.201327] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:05:05.073 [2024-11-27 11:43:31.202000] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58265 ] 00:05:05.073 [2024-11-27 11:43:31.378399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:05.333 [2024-11-27 11:43:31.513971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.333 [2024-11-27 11:43:31.514220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.333 [2024-11-27 11:43:31.514362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:05.333 [2024-11-27 11:43:31.514406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:05.901 11:43:32 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.901 11:43:32 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:05.901 11:43:32 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:05.901 11:43:32 
event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.901 11:43:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:05.901 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:05.901 POWER: Cannot set governor of lcore 0 to userspace 00:05:05.901 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:05.901 POWER: Cannot set governor of lcore 0 to performance 00:05:05.901 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:05.901 POWER: Cannot set governor of lcore 0 to userspace 00:05:05.901 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:05.901 POWER: Cannot set governor of lcore 0 to userspace 00:05:05.901 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:05.901 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:05.901 POWER: Unable to set Power Management Environment for lcore 0 00:05:05.901 [2024-11-27 11:43:32.040094] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:05.901 [2024-11-27 11:43:32.040124] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:05.901 [2024-11-27 11:43:32.040137] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:05.901 [2024-11-27 11:43:32.040165] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:05.901 [2024-11-27 11:43:32.040175] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:05.901 [2024-11-27 11:43:32.040187] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:05.901 11:43:32 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.901 11:43:32 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd 
framework_start_init 00:05:05.901 11:43:32 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.901 11:43:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.162 [2024-11-27 11:43:32.419527] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:06.162 11:43:32 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.162 11:43:32 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:06.162 11:43:32 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.162 11:43:32 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.162 11:43:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.162 ************************************ 00:05:06.162 START TEST scheduler_create_thread 00:05:06.162 ************************************ 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.162 2 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.162 
11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.162 3 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.162 4 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.162 5 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.162 6 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 
-- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.162 7 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.162 8 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.162 9 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.162 10 
00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.162 11:43:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:06.163 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.163 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.163 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.163 11:43:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:06.163 11:43:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:06.163 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.163 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.422 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.422 11:43:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:06.422 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.422 11:43:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.800 11:43:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.800 11:43:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:07.800 11:43:33 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:07.800 11:43:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.800 11:43:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.738 11:43:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.738 00:05:08.738 real 0m2.620s 00:05:08.738 user 0m0.028s 00:05:08.738 sys 0m0.009s 00:05:08.738 11:43:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.738 11:43:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.738 ************************************ 00:05:08.738 END TEST scheduler_create_thread 00:05:08.738 ************************************ 00:05:08.738 11:43:35 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:08.738 11:43:35 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58265 00:05:08.738 11:43:35 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58265 ']' 00:05:08.738 11:43:35 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58265 00:05:08.738 11:43:35 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:08.738 11:43:35 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.738 11:43:35 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58265 00:05:08.998 killing process with pid 58265 00:05:08.998 11:43:35 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:08.998 11:43:35 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:08.998 11:43:35 event.event_scheduler -- common/autotest_common.sh@972 
-- # echo 'killing process with pid 58265' 00:05:08.998 11:43:35 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58265 00:05:08.998 11:43:35 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58265 00:05:09.265 [2024-11-27 11:43:35.532592] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:10.656 ************************************ 00:05:10.656 END TEST event_scheduler 00:05:10.656 ************************************ 00:05:10.656 00:05:10.656 real 0m5.894s 00:05:10.656 user 0m9.867s 00:05:10.656 sys 0m0.588s 00:05:10.656 11:43:36 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.656 11:43:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:10.656 11:43:36 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:10.656 11:43:36 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:10.656 11:43:36 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.656 11:43:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.656 11:43:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.656 ************************************ 00:05:10.656 START TEST app_repeat 00:05:10.656 ************************************ 00:05:10.656 11:43:36 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:10.656 11:43:36 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.656 11:43:36 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.656 11:43:36 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:10.656 11:43:36 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.656 11:43:36 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:10.656 11:43:36 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:10.656 11:43:36 event.app_repeat -- 
event/event.sh@17 -- # modprobe nbd 00:05:10.656 11:43:36 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58376 00:05:10.656 11:43:36 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:10.656 11:43:36 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.656 11:43:36 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58376' 00:05:10.656 Process app_repeat pid: 58376 00:05:10.656 spdk_app_start Round 0 00:05:10.656 11:43:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:10.656 11:43:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:10.656 11:43:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58376 /var/tmp/spdk-nbd.sock 00:05:10.656 11:43:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58376 ']' 00:05:10.656 11:43:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:10.656 11:43:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.656 11:43:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:10.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:10.656 11:43:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.656 11:43:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:10.656 [2024-11-27 11:43:36.920452] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:05:10.656 [2024-11-27 11:43:36.920608] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58376 ] 00:05:10.915 [2024-11-27 11:43:37.097779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.915 [2024-11-27 11:43:37.234238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.915 [2024-11-27 11:43:37.234282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.479 11:43:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.479 11:43:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:11.479 11:43:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.736 Malloc0 00:05:11.736 11:43:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.995 Malloc1 00:05:11.995 11:43:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.995 11:43:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.995 11:43:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.995 11:43:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:11.995 11:43:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.995 11:43:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:11.995 11:43:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.995 11:43:38 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.995 11:43:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.995 11:43:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:11.995 11:43:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.995 11:43:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:11.995 11:43:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:11.995 11:43:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:11.995 11:43:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.995 11:43:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:12.253 /dev/nbd0 00:05:12.253 11:43:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:12.253 11:43:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:12.253 11:43:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:12.253 11:43:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:12.253 11:43:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:12.253 11:43:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:12.253 11:43:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:12.253 11:43:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:12.253 11:43:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:12.253 11:43:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:12.253 11:43:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.253 1+0 records in 00:05:12.253 1+0 
records out 00:05:12.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500525 s, 8.2 MB/s 00:05:12.253 11:43:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.253 11:43:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:12.253 11:43:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.253 11:43:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:12.253 11:43:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:12.253 11:43:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.253 11:43:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.253 11:43:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:12.511 /dev/nbd1 00:05:12.511 11:43:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:12.511 11:43:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:12.511 11:43:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:12.511 11:43:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:12.511 11:43:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:12.511 11:43:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:12.511 11:43:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:12.511 11:43:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:12.511 11:43:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:12.511 11:43:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:12.511 11:43:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.511 1+0 records in 00:05:12.511 1+0 records out 00:05:12.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293899 s, 13.9 MB/s 00:05:12.512 11:43:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.512 11:43:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:12.512 11:43:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.512 11:43:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:12.512 11:43:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:12.512 11:43:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.512 11:43:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.512 11:43:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.512 11:43:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.512 11:43:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.770 11:43:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:12.770 { 00:05:12.770 "nbd_device": "/dev/nbd0", 00:05:12.770 "bdev_name": "Malloc0" 00:05:12.770 }, 00:05:12.770 { 00:05:12.770 "nbd_device": "/dev/nbd1", 00:05:12.770 "bdev_name": "Malloc1" 00:05:12.770 } 00:05:12.770 ]' 00:05:12.770 11:43:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:12.770 { 00:05:12.770 "nbd_device": "/dev/nbd0", 00:05:12.770 "bdev_name": "Malloc0" 00:05:12.770 }, 00:05:12.770 { 00:05:12.770 "nbd_device": "/dev/nbd1", 00:05:12.770 "bdev_name": "Malloc1" 00:05:12.770 } 00:05:12.770 ]' 00:05:12.770 11:43:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:12.770 11:43:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:12.770 /dev/nbd1' 00:05:12.770 11:43:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:12.770 /dev/nbd1' 00:05:12.770 11:43:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.770 11:43:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:12.770 11:43:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:12.770 11:43:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:12.770 11:43:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:12.770 11:43:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:12.770 11:43:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.770 11:43:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.770 11:43:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:12.770 11:43:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.770 11:43:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:12.770 11:43:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:12.770 256+0 records in 00:05:12.770 256+0 records out 00:05:12.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136508 s, 76.8 MB/s 00:05:12.770 11:43:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.770 11:43:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:12.770 256+0 records in 00:05:12.770 256+0 records out 00:05:12.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258865 s, 40.5 MB/s 00:05:12.770 11:43:39 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.770 11:43:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:13.028 256+0 records in 00:05:13.028 256+0 records out 00:05:13.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286009 s, 36.7 MB/s 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.028 11:43:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:13.286 11:43:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:13.286 11:43:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:13.286 11:43:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:13.286 11:43:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.286 11:43:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.286 11:43:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:13.286 11:43:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:13.286 11:43:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.286 11:43:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.286 11:43:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.286 11:43:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.545 11:43:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:13.545 11:43:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:13.545 11:43:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.545 11:43:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:13.545 11:43:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.545 11:43:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:13.545 11:43:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:13.545 11:43:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:13.545 11:43:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:13.545 11:43:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:13.545 11:43:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:13.545 11:43:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:13.545 11:43:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:14.114 11:43:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:15.493 [2024-11-27 11:43:41.507311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.493 [2024-11-27 11:43:41.636316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.493 [2024-11-27 11:43:41.636319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.493 
[2024-11-27 11:43:41.852002] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:15.493 [2024-11-27 11:43:41.852085] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:17.398 spdk_app_start Round 1 00:05:17.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:17.398 11:43:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:17.398 11:43:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:17.398 11:43:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58376 /var/tmp/spdk-nbd.sock 00:05:17.398 11:43:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58376 ']' 00:05:17.398 11:43:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.398 11:43:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.398 11:43:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:17.398 11:43:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.398 11:43:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:17.398 11:43:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.398 11:43:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:17.398 11:43:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.398 Malloc0 00:05:17.398 11:43:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.656 Malloc1 00:05:17.656 11:43:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.656 11:43:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.656 11:43:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.656 11:43:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.656 11:43:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.656 11:43:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.656 11:43:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.656 11:43:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.656 11:43:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.656 11:43:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.657 11:43:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.657 11:43:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.657 11:43:44 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:17.657 11:43:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.657 11:43:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.657 11:43:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.915 /dev/nbd0 00:05:17.915 11:43:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.915 11:43:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:17.915 11:43:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:17.915 11:43:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:17.915 11:43:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:17.915 11:43:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:17.915 11:43:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:17.915 11:43:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:17.915 11:43:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:17.915 11:43:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:17.915 11:43:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.915 1+0 records in 00:05:17.915 1+0 records out 00:05:17.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341144 s, 12.0 MB/s 00:05:17.916 11:43:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.916 11:43:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:17.916 11:43:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.916 
11:43:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:17.916 11:43:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:17.916 11:43:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.916 11:43:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.916 11:43:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:18.175 /dev/nbd1 00:05:18.175 11:43:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:18.175 11:43:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:18.175 11:43:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:18.175 11:43:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:18.175 11:43:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:18.175 11:43:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:18.175 11:43:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:18.175 11:43:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:18.175 11:43:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:18.175 11:43:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:18.175 11:43:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.175 1+0 records in 00:05:18.175 1+0 records out 00:05:18.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329228 s, 12.4 MB/s 00:05:18.175 11:43:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.175 11:43:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:18.175 11:43:44 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.175 11:43:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:18.175 11:43:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:18.175 11:43:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.175 11:43:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.175 11:43:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.175 11:43:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.175 11:43:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.434 11:43:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:18.434 { 00:05:18.434 "nbd_device": "/dev/nbd0", 00:05:18.434 "bdev_name": "Malloc0" 00:05:18.434 }, 00:05:18.435 { 00:05:18.435 "nbd_device": "/dev/nbd1", 00:05:18.435 "bdev_name": "Malloc1" 00:05:18.435 } 00:05:18.435 ]' 00:05:18.435 11:43:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:18.435 { 00:05:18.435 "nbd_device": "/dev/nbd0", 00:05:18.435 "bdev_name": "Malloc0" 00:05:18.435 }, 00:05:18.435 { 00:05:18.435 "nbd_device": "/dev/nbd1", 00:05:18.435 "bdev_name": "Malloc1" 00:05:18.435 } 00:05:18.435 ]' 00:05:18.435 11:43:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.435 11:43:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:18.435 /dev/nbd1' 00:05:18.435 11:43:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:18.435 /dev/nbd1' 00:05:18.435 11:43:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.435 11:43:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.435 11:43:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.435 
11:43:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.435 11:43:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.435 11:43:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.435 11:43:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.435 11:43:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.435 11:43:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.435 11:43:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.435 11:43:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.435 11:43:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.435 256+0 records in 00:05:18.435 256+0 records out 00:05:18.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138411 s, 75.8 MB/s 00:05:18.435 11:43:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.435 11:43:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.435 256+0 records in 00:05:18.435 256+0 records out 00:05:18.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214967 s, 48.8 MB/s 00:05:18.435 11:43:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.435 11:43:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.435 256+0 records in 00:05:18.435 256+0 records out 00:05:18.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233068 s, 45.0 MB/s 00:05:18.435 11:43:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:18.694 11:43:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.694 11:43:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.694 11:43:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.694 11:43:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.694 11:43:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.694 11:43:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.694 11:43:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.694 11:43:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.694 11:43:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.694 11:43:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.694 11:43:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.694 11:43:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.694 11:43:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.694 11:43:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.694 11:43:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.694 11:43:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:18.694 11:43:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.694 11:43:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.694 11:43:45 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.694 11:43:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.694 11:43:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.694 11:43:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.694 11:43:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.694 11:43:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.694 11:43:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.694 11:43:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.694 11:43:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.694 11:43:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.953 11:43:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.953 11:43:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.953 11:43:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.953 11:43:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.953 11:43:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.953 11:43:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.953 11:43:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.953 11:43:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.953 11:43:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.953 11:43:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.953 11:43:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.211 11:43:45 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:19.211 11:43:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:19.211 11:43:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.211 11:43:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:19.211 11:43:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:19.211 11:43:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.211 11:43:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:19.211 11:43:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:19.211 11:43:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:19.211 11:43:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:19.211 11:43:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:19.211 11:43:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:19.211 11:43:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:19.780 11:43:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:20.718 [2024-11-27 11:43:47.002910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.978 [2024-11-27 11:43:47.103865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.978 [2024-11-27 11:43:47.103907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.978 [2024-11-27 11:43:47.295226] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:20.978 [2024-11-27 11:43:47.295291] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:22.887 spdk_app_start Round 2 00:05:22.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:22.887 11:43:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:22.887 11:43:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:22.887 11:43:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58376 /var/tmp/spdk-nbd.sock 00:05:22.887 11:43:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58376 ']' 00:05:22.887 11:43:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.887 11:43:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.887 11:43:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:22.887 11:43:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.887 11:43:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.887 11:43:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.887 11:43:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:22.887 11:43:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.147 Malloc0 00:05:23.147 11:43:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.405 Malloc1 00:05:23.405 11:43:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.405 11:43:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.405 11:43:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.405 11:43:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:23.405 11:43:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.405 11:43:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:23.405 11:43:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.405 11:43:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.405 11:43:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.405 11:43:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:23.405 11:43:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.405 11:43:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:23.405 11:43:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:23.405 11:43:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:23.405 11:43:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.405 11:43:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:23.662 /dev/nbd0 00:05:23.662 11:43:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:23.662 11:43:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:23.662 11:43:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:23.662 11:43:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:23.662 11:43:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:23.662 11:43:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:23.662 11:43:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:23.662 11:43:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:23.662 11:43:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:05:23.662 11:43:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:23.662 11:43:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.662 1+0 records in 00:05:23.662 1+0 records out 00:05:23.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262848 s, 15.6 MB/s 00:05:23.662 11:43:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.662 11:43:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:23.662 11:43:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.662 11:43:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:23.662 11:43:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:23.662 11:43:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.662 11:43:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.662 11:43:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:23.920 /dev/nbd1 00:05:23.920 11:43:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:23.920 11:43:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:23.920 11:43:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:23.920 11:43:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:23.920 11:43:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:23.920 11:43:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:23.920 11:43:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:23.920 11:43:50 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:23.920 11:43:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:23.920 11:43:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:23.920 11:43:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.920 1+0 records in 00:05:23.920 1+0 records out 00:05:23.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240077 s, 17.1 MB/s 00:05:23.920 11:43:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.920 11:43:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:23.920 11:43:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.921 11:43:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:23.921 11:43:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:23.921 11:43:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.921 11:43:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.921 11:43:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.921 11:43:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.921 11:43:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:24.180 { 00:05:24.180 "nbd_device": "/dev/nbd0", 00:05:24.180 "bdev_name": "Malloc0" 00:05:24.180 }, 00:05:24.180 { 00:05:24.180 "nbd_device": "/dev/nbd1", 00:05:24.180 "bdev_name": "Malloc1" 00:05:24.180 } 00:05:24.180 ]' 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:24.180 { 
00:05:24.180 "nbd_device": "/dev/nbd0", 00:05:24.180 "bdev_name": "Malloc0" 00:05:24.180 }, 00:05:24.180 { 00:05:24.180 "nbd_device": "/dev/nbd1", 00:05:24.180 "bdev_name": "Malloc1" 00:05:24.180 } 00:05:24.180 ]' 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:24.180 /dev/nbd1' 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:24.180 /dev/nbd1' 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:24.180 256+0 records in 00:05:24.180 256+0 records out 00:05:24.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013803 s, 76.0 MB/s 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.180 11:43:50 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:24.180 256+0 records in 00:05:24.180 256+0 records out 00:05:24.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215309 s, 48.7 MB/s 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:24.180 256+0 records in 00:05:24.180 256+0 records out 00:05:24.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232745 s, 45.1 MB/s 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
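The write/verify pass traced above (`nbd_dd_data_verify`) boils down to: fill a temp file with random data, `dd` it onto each device, then `cmp` each device back against the file. A standalone sketch of that same pattern, using plain files in place of `/dev/nbd0` and `/dev/nbd1` so it runs without an nbd server — every path below is a stand-in, not the real test layout:

```shell
# Write-then-verify in the style of nbd_dd_data_verify; files under $tmp
# stand in for the nbd devices.
tmp=$(mktemp -d)
dd if=/dev/urandom of="$tmp/pattern" bs=4096 count=256 2>/dev/null

status=ok
for dev in "$tmp/disk0" "$tmp/disk1"; do
    # write pass: copy the random pattern onto the "device"
    dd if="$tmp/pattern" of="$dev" bs=4096 count=256 2>/dev/null
done
for dev in "$tmp/disk0" "$tmp/disk1"; do
    # verify pass: byte-compare the first 1 MiB against the pattern file
    cmp -s -n 1048576 "$tmp/pattern" "$dev" || status=fail
done
rm -r "$tmp"
echo "$status"
```

The real helper also removes the pattern file afterwards (the `rm nbdrandtest` seen in the trace), which the `rm -r "$tmp"` line mirrors here.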
00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.180 11:43:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:24.440 11:43:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:24.440 11:43:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:24.440 11:43:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:24.440 11:43:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.440 11:43:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.440 11:43:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:24.440 11:43:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.440 11:43:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.440 11:43:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.440 11:43:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:24.700 11:43:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:24.700 11:43:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:24.700 11:43:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:24.700 11:43:50 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.700 11:43:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.700 11:43:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:24.700 11:43:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.700 11:43:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.700 11:43:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.700 11:43:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.700 11:43:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.960 11:43:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:24.960 11:43:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.960 11:43:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:24.960 11:43:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:24.960 11:43:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:24.960 11:43:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.960 11:43:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:24.960 11:43:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:24.960 11:43:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:24.960 11:43:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:24.960 11:43:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:24.960 11:43:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:24.960 11:43:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:25.220 11:43:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:26.600 
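The `waitfornbd_exit` calls in the teardown above poll `/proc/partitions` up to 20 times until the nbd name disappears, then `break`. The same bounded-retry shape reduced to a generic helper — the helper name is illustrative, and a plain file path replaces the `/proc/partitions` grep:

```shell
# Bounded poll in the style of waitfornbd_exit: succeed as soon as the
# path is gone, give up after 20 attempts (~2 s).
wait_gone() {
    local path=$1 i
    for ((i = 1; i <= 20; i++)); do
        [ ! -e "$path" ] && return 0
        sleep 0.1
    done
    return 1
}

# Usage: something else removes the file while we poll for it.
f=$(mktemp)
( sleep 0.3; rm -f "$f" ) &
wait_gone "$f"; rc=$?
wait
echo "$rc"
```

The fixed retry cap is what keeps a stuck device from hanging the whole run: the caller gets a nonzero return instead of an infinite loop.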
[2024-11-27 11:43:52.677161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.600 [2024-11-27 11:43:52.783334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.600 [2024-11-27 11:43:52.783341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.600 [2024-11-27 11:43:52.963354] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:26.600 [2024-11-27 11:43:52.963421] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:28.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:28.507 11:43:54 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58376 /var/tmp/spdk-nbd.sock 00:05:28.507 11:43:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58376 ']' 00:05:28.507 11:43:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.507 11:43:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.507 11:43:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:28.507 11:43:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.507 11:43:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.507 11:43:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.507 11:43:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:28.507 11:43:54 event.app_repeat -- event/event.sh@39 -- # killprocess 58376 00:05:28.507 11:43:54 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58376 ']' 00:05:28.507 11:43:54 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58376 00:05:28.507 11:43:54 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:28.507 11:43:54 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.507 11:43:54 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58376 00:05:28.507 killing process with pid 58376 00:05:28.507 11:43:54 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.507 11:43:54 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.507 11:43:54 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58376' 00:05:28.507 11:43:54 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58376 00:05:28.507 11:43:54 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58376 00:05:29.885 spdk_app_start is called in Round 0. 00:05:29.885 Shutdown signal received, stop current app iteration 00:05:29.885 Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 reinitialization... 00:05:29.885 spdk_app_start is called in Round 1. 00:05:29.885 Shutdown signal received, stop current app iteration 00:05:29.885 Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 reinitialization... 00:05:29.885 spdk_app_start is called in Round 2. 
00:05:29.885 Shutdown signal received, stop current app iteration 00:05:29.885 Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 reinitialization... 00:05:29.885 spdk_app_start is called in Round 3. 00:05:29.885 Shutdown signal received, stop current app iteration 00:05:29.885 11:43:55 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:29.885 11:43:55 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:29.885 00:05:29.885 real 0m19.020s 00:05:29.885 user 0m40.304s 00:05:29.885 sys 0m2.875s 00:05:29.885 ************************************ 00:05:29.885 END TEST app_repeat 00:05:29.885 ************************************ 00:05:29.885 11:43:55 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.885 11:43:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:29.885 11:43:55 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:29.885 11:43:55 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:29.885 11:43:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.885 11:43:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.885 11:43:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.885 ************************************ 00:05:29.885 START TEST cpu_locks 00:05:29.885 ************************************ 00:05:29.885 11:43:55 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:29.885 * Looking for test storage... 
00:05:29.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:29.885 11:43:56 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:29.885 11:43:56 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:29.885 11:43:56 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:29.885 11:43:56 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:29.885 11:43:56 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.885 11:43:56 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.885 11:43:56 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.885 11:43:56 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.885 11:43:56 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.885 11:43:56 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.885 11:43:56 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.885 11:43:56 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.885 11:43:56 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.885 11:43:56 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.885 11:43:56 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.885 11:43:56 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:29.885 11:43:56 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:29.885 11:43:56 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.885 11:43:56 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.885 11:43:56 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:29.885 11:43:56 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:29.885 11:43:56 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.885 11:43:56 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:29.885 11:43:56 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.885 11:43:56 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:29.885 11:43:56 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:29.885 11:43:56 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.885 11:43:56 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:29.885 11:43:56 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.885 11:43:56 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.885 11:43:56 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.885 11:43:56 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:29.885 11:43:56 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.885 11:43:56 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:29.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.885 --rc genhtml_branch_coverage=1 00:05:29.885 --rc genhtml_function_coverage=1 00:05:29.885 --rc genhtml_legend=1 00:05:29.885 --rc geninfo_all_blocks=1 00:05:29.885 --rc geninfo_unexecuted_blocks=1 00:05:29.885 00:05:29.885 ' 00:05:29.885 11:43:56 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:29.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.885 --rc genhtml_branch_coverage=1 00:05:29.885 --rc genhtml_function_coverage=1 00:05:29.885 --rc genhtml_legend=1 00:05:29.885 --rc geninfo_all_blocks=1 00:05:29.885 --rc geninfo_unexecuted_blocks=1 
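The `lt` / `cmp_versions` machinery traced above (from `scripts/common.sh`) splits both version strings on dots and compares them field by field, padding the shorter one with zeros. A minimal reimplementation of that idea — the function name and exact semantics are a simplification of the real helper, which also handles `-`/`:` separators:

```shell
# version_lt A B: return 0 if version A sorts strictly before B,
# comparing dot-separated numeric fields left to right.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1   # equal is not less-than
}
```

This is exactly the check the test uses to decide whether the installed `lcov` (1.15 in the trace) predates version 2 and therefore needs the legacy `--rc lcov_*` option spelling.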
00:05:29.885 00:05:29.885 ' 00:05:29.885 11:43:56 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:29.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.885 --rc genhtml_branch_coverage=1 00:05:29.885 --rc genhtml_function_coverage=1 00:05:29.885 --rc genhtml_legend=1 00:05:29.885 --rc geninfo_all_blocks=1 00:05:29.885 --rc geninfo_unexecuted_blocks=1 00:05:29.885 00:05:29.885 ' 00:05:29.885 11:43:56 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:29.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.885 --rc genhtml_branch_coverage=1 00:05:29.885 --rc genhtml_function_coverage=1 00:05:29.885 --rc genhtml_legend=1 00:05:29.885 --rc geninfo_all_blocks=1 00:05:29.885 --rc geninfo_unexecuted_blocks=1 00:05:29.885 00:05:29.885 ' 00:05:29.885 11:43:56 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:29.885 11:43:56 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:29.885 11:43:56 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:29.885 11:43:56 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:29.885 11:43:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.885 11:43:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.885 11:43:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.885 ************************************ 00:05:29.885 START TEST default_locks 00:05:29.885 ************************************ 00:05:29.885 11:43:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:29.885 11:43:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58818 00:05:29.885 11:43:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.885 
11:43:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58818 00:05:29.885 11:43:56 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58818 ']' 00:05:29.885 11:43:56 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.885 11:43:56 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.885 11:43:56 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.886 11:43:56 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.886 11:43:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.144 [2024-11-27 11:43:56.275844] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:05:30.144 [2024-11-27 11:43:56.276058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58818 ] 00:05:30.144 [2024-11-27 11:43:56.448636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.405 [2024-11-27 11:43:56.560448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.344 11:43:57 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.344 11:43:57 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:31.344 11:43:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58818 00:05:31.344 11:43:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58818 00:05:31.344 11:43:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.604 11:43:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58818 00:05:31.604 11:43:57 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58818 ']' 00:05:31.604 11:43:57 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58818 00:05:31.604 11:43:57 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:31.604 11:43:57 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.604 11:43:57 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58818 00:05:31.604 killing process with pid 58818 00:05:31.604 11:43:57 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.604 11:43:57 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.604 11:43:57 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58818' 00:05:31.604 11:43:57 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58818 00:05:31.604 11:43:57 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58818 00:05:34.170 11:44:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58818 00:05:34.170 11:44:00 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:34.170 11:44:00 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58818 00:05:34.170 11:44:00 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:34.170 11:44:00 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.170 11:44:00 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:34.170 11:44:00 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.170 11:44:00 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58818 00:05:34.170 11:44:00 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58818 ']' 00:05:34.170 11:44:00 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.170 11:44:00 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.170 11:44:00 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
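The `killprocess` sequence traced above checks the pid is still alive with `kill -0`, inspects the process name via `ps`, sends SIGTERM, then waits for the process to exit. A trimmed sketch of that sequence against a throwaway child process — the `uname`/process-name check from the real helper is dropped here for brevity:

```shell
# Minimal killprocess: refuse if the pid is already gone, otherwise
# SIGTERM it and poll until it has exited.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # not running
    kill "$pid"                              # SIGTERM, as in the trace
    while kill -0 "$pid" 2>/dev/null; do
        sleep 0.1
    done
}

sleep 60 & demo=$!
killprocess "$demo"
```

Calling it again on the same pid returns 1, which matches the `ERROR: process ... is no longer running` path the negative test (`NOT waitforlisten 58818`) exercises above.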
00:05:34.170 11:44:00 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:34.170 11:44:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:34.170 ERROR: process (pid: 58818) is no longer running
00:05:34.170 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58818) - No such process
00:05:34.170 11:44:00 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:34.170 11:44:00 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:05:34.170 11:44:00 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:05:34.170 11:44:00 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:34.170 11:44:00 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:34.170 11:44:00 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:34.170 11:44:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:34.170 11:44:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:34.170 11:44:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:34.170 11:44:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:34.170
00:05:34.170 real 0m4.033s
00:05:34.170 user 0m3.961s
00:05:34.170 sys 0m0.654s
00:05:34.170 ************************************
00:05:34.170 END TEST default_locks
00:05:34.170 ************************************
00:05:34.170 11:44:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:34.170 11:44:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:34.170 11:44:00 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:34.170 11:44:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:34.170 11:44:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:34.170 11:44:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:34.170 ************************************
00:05:34.170 START TEST default_locks_via_rpc
00:05:34.170 ************************************
00:05:34.170 11:44:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:05:34.170 11:44:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58893
00:05:34.170 11:44:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:34.170 11:44:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58893
00:05:34.170 11:44:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58893 ']'
00:05:34.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:34.170 11:44:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:34.170 11:44:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:34.170 11:44:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:34.170 11:44:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:34.170 11:44:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:34.170 [2024-11-27 11:44:00.376291] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization...
00:05:34.170 [2024-11-27 11:44:00.376411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58893 ]
00:05:34.170 [2024-11-27 11:44:00.550584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:34.430 [2024-11-27 11:44:00.664783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:35.370 11:44:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:35.370 11:44:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:35.370 11:44:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:35.370 11:44:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:35.370 11:44:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:35.370 11:44:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:35.370 11:44:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:35.370 11:44:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:35.370 11:44:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:35.370 11:44:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:35.370 11:44:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:35.370 11:44:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:35.370 11:44:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:35.370 11:44:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:35.370 11:44:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58893
00:05:35.370 11:44:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58893
00:05:35.370 11:44:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:35.628 11:44:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58893
00:05:35.628 11:44:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58893 ']'
00:05:35.628 11:44:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58893
00:05:35.628 11:44:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:05:35.628 11:44:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:35.628 11:44:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58893
00:05:35.887 killing process with pid 58893 11:44:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:35.887 11:44:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:35.887 11:44:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58893'
00:05:35.887 11:44:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58893
00:05:35.887 11:44:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58893
00:05:38.453
00:05:38.453 real 0m4.133s
00:05:38.453 user 0m4.091s
00:05:38.453 sys 0m0.672s
00:05:38.453 ************************************
00:05:38.453 END TEST default_locks_via_rpc
00:05:38.453 ************************************
00:05:38.453 11:44:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:38.453 11:44:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:38.453 11:44:04 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:38.453 11:44:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:38.453 11:44:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:38.453 11:44:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:38.453 ************************************
00:05:38.453 START TEST non_locking_app_on_locked_coremask
00:05:38.453 ************************************
00:05:38.453 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:05:38.453 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58967
00:05:38.453 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:38.453 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58967 /var/tmp/spdk.sock
00:05:38.453 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58967 ']'
00:05:38.453 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:38.453 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:38.453 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:38.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:38.453 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:38.453 11:44:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:38.453 [2024-11-27 11:44:04.585270] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization...
00:05:38.453 [2024-11-27 11:44:04.585485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58967 ]
00:05:38.453 [2024-11-27 11:44:04.759189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:38.713 [2024-11-27 11:44:04.870502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:39.652 11:44:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:39.652 11:44:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:39.652 11:44:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:39.652 11:44:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58989
00:05:39.652 11:44:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58989 /var/tmp/spdk2.sock
00:05:39.652 11:44:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58989 ']'
00:05:39.652 11:44:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:39.652 11:44:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:39.652 11:44:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:39.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:39.652 11:44:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:39.652 11:44:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:39.652 [2024-11-27 11:44:05.811923] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization...
00:05:39.652 [2024-11-27 11:44:05.812148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58989 ]
00:05:39.652 [2024-11-27 11:44:05.985019] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:39.652 [2024-11-27 11:44:05.985074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:39.912 [2024-11-27 11:44:06.218561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:42.451 11:44:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:42.451 11:44:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:42.451 11:44:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58967
00:05:42.451 11:44:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58967
00:05:42.451 11:44:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:42.451 11:44:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58967
00:05:42.451 11:44:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58967 ']'
00:05:42.451 11:44:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58967
00:05:42.451 11:44:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:42.451 11:44:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:42.451 11:44:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58967
00:05:42.710 killing process with pid 58967 11:44:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:42.710 11:44:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:42.710 11:44:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58967'
00:05:42.710 11:44:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58967
00:05:42.710 11:44:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58967
00:05:48.064 11:44:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58989
00:05:48.064 11:44:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58989 ']'
00:05:48.064 11:44:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58989
00:05:48.064 11:44:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:48.064 11:44:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:48.064 11:44:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58989
00:05:48.064 killing process with pid 58989 11:44:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:48.064 11:44:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:48.064 11:44:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58989'
00:05:48.064 11:44:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58989
00:05:48.064 11:44:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58989
00:05:49.980 ************************************
00:05:49.980 END TEST non_locking_app_on_locked_coremask
00:05:49.980 ************************************
00:05:49.980
00:05:49.980 real 0m11.465s
00:05:49.980 user 0m11.659s
00:05:49.980 sys 0m1.220s 11:44:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:49.980 11:44:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:49.980 11:44:16 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:49.980 11:44:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:49.980 11:44:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:49.980 11:44:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:49.980 ************************************
00:05:49.980 START TEST locking_app_on_unlocked_coremask
00:05:49.980 ************************************
00:05:49.980 11:44:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:05:49.980 11:44:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59139
00:05:49.980 11:44:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:49.980 11:44:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59139 /var/tmp/spdk.sock
00:05:49.980 11:44:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59139 ']'
00:05:49.980 11:44:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:49.980 11:44:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:49.980 11:44:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:49.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:49.980 11:44:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:49.980 11:44:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:49.980 [2024-11-27 11:44:16.113729] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization...
00:05:49.980 [2024-11-27 11:44:16.113939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59139 ]
00:05:49.980 [2024-11-27 11:44:16.289334] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:49.980 [2024-11-27 11:44:16.289475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:50.240 [2024-11-27 11:44:16.404941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:51.179 11:44:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:51.179 11:44:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:51.179 11:44:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59155
00:05:51.179 11:44:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:51.179 11:44:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59155 /var/tmp/spdk2.sock
00:05:51.179 11:44:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59155 ']'
00:05:51.179 11:44:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:51.179 11:44:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:51.179 11:44:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:51.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:51.179 11:44:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:51.179 11:44:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:51.179 [2024-11-27 11:44:17.338091] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization...
00:05:51.179 [2024-11-27 11:44:17.338290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59155 ]
00:05:51.179 [2024-11-27 11:44:17.506516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:51.438 [2024-11-27 11:44:17.733491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:53.975 11:44:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:53.975 11:44:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:53.975 11:44:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59155
00:05:53.975 11:44:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59155
00:05:53.975 11:44:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:53.975 11:44:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59139
00:05:53.975 11:44:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59139 ']'
00:05:53.975 11:44:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59139
00:05:53.975 11:44:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:53.975 11:44:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:53.975 11:44:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59139
00:05:53.975 killing process with pid 59139 11:44:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:53.975 11:44:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:53.975 11:44:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59139'
00:05:53.975 11:44:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59139
00:05:53.975 11:44:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59139
00:05:59.257 11:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59155
00:05:59.257 11:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59155 ']'
00:05:59.257 11:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59155
00:05:59.257 11:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:59.257 11:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:59.257 11:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59155
00:05:59.257 killing process with pid 59155 11:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:59.257 11:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:59.257 11:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59155'
00:05:59.257 11:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59155
00:05:59.257 11:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59155
00:06:01.168
00:06:01.168 real 0m11.304s
00:06:01.168 user 0m11.492s
00:06:01.168 sys 0m1.211s
00:06:01.168 11:44:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:01.168 11:44:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:01.168 ************************************
00:06:01.168 END TEST locking_app_on_unlocked_coremask
00:06:01.168 ************************************
00:06:01.168 11:44:27 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:01.168 11:44:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:01.168 11:44:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:01.168 11:44:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:01.168 ************************************
00:06:01.168 START TEST locking_app_on_locked_coremask ************************************
00:06:01.168 11:44:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:06:01.168 11:44:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59301
00:06:01.168 11:44:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:01.168 11:44:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59301 /var/tmp/spdk.sock
00:06:01.168 11:44:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59301 ']'
00:06:01.168 11:44:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:01.168 11:44:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:01.168 11:44:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:01.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:01.168 11:44:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:01.168 11:44:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:01.168 [2024-11-27 11:44:27.483753] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization...
00:06:01.168 [2024-11-27 11:44:27.483973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59301 ]
00:06:01.427 [2024-11-27 11:44:27.660502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:01.427 [2024-11-27 11:44:27.776968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:02.366 11:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:02.366 11:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:02.366 11:44:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59322
00:06:02.366 11:44:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59322 /var/tmp/spdk2.sock
00:06:02.366 11:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:06:02.366 11:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59322 /var/tmp/spdk2.sock
00:06:02.366 11:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:02.366 11:44:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:02.366 11:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:02.366 11:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:02.366 11:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:02.366 11:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59322 /var/tmp/spdk2.sock
00:06:02.366 11:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59322 ']'
00:06:02.366 11:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:02.366 11:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:02.366 11:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:02.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:02.366 11:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:02.366 11:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:02.366 [2024-11-27 11:44:28.736427] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization...
00:06:02.366 [2024-11-27 11:44:28.736648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59322 ]
00:06:02.625 [2024-11-27 11:44:28.905902] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59301 has claimed it.
00:06:02.625 [2024-11-27 11:44:28.905982] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:03.194 ERROR: process (pid: 59322) is no longer running
00:06:03.194 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59322) - No such process
00:06:03.194 11:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:03.194 11:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:06:03.194 11:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:06:03.194 11:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:03.194 11:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:03.194 11:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:03.194 11:44:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59301
00:06:03.194 11:44:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59301
00:06:03.194 11:44:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:03.453 11:44:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59301
00:06:03.453 11:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59301 ']'
00:06:03.453 11:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59301
00:06:03.713 11:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:03.713 11:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:03.713 11:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59301
00:06:03.713 killing process with pid 59301
00:06:03.713 11:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:03.713 11:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:03.713 11:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59301'
00:06:03.713 11:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59301
00:06:03.713 11:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59301
00:06:06.253
00:06:06.253 real 0m4.863s
00:06:06.253 user 0m5.024s
00:06:06.253 sys 0m0.824s
00:06:06.253 11:44:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:06.253 ************************************
00:06:06.253 END TEST locking_app_on_locked_coremask
00:06:06.253 ************************************
00:06:06.253 11:44:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:06.253 11:44:32 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:06.253 11:44:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:06.253 11:44:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:06.253 11:44:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:06.253 ************************************
00:06:06.253 START TEST locking_overlapped_coremask
00:06:06.253 ************************************
00:06:06.253 11:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:06:06.253 11:44:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59386
00:06:06.253 11:44:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:06:06.253 11:44:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59386 /var/tmp/spdk.sock
00:06:06.253 11:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59386 ']'
00:06:06.253 11:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:06.253 11:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:06.253 11:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:06.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:06.253 11:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:06.253 11:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:06.253 [2024-11-27 11:44:32.412569] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization...
00:06:06.254 [2024-11-27 11:44:32.412688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59386 ]
00:06:06.254 [2024-11-27 11:44:32.587733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:06.513 [2024-11-27 11:44:32.706710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:06.513 [2024-11-27 11:44:32.706845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:06.513 [2024-11-27 11:44:32.706901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:07.452 11:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:07.452 11:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:07.452 11:44:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59410
00:06:07.452 11:44:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:07.452 11:44:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59410 /var/tmp/spdk2.sock
00:06:07.452 11:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:06:07.452 11:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59410 /var/tmp/spdk2.sock
00:06:07.452 11:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:07.452 11:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:07.452 11:44:33 event.cpu_locks.locking_overlapped_coremask
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:07.452 11:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.452 11:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59410 /var/tmp/spdk2.sock 00:06:07.452 11:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59410 ']' 00:06:07.452 11:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.452 11:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.452 11:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.452 11:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.452 11:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.452 [2024-11-27 11:44:33.686699] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:06:07.452 [2024-11-27 11:44:33.686910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59410 ] 00:06:07.712 [2024-11-27 11:44:33.861980] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59386 has claimed it. 00:06:07.712 [2024-11-27 11:44:33.862070] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:07.971 ERROR: process (pid: 59410) is no longer running 00:06:07.971 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59410) - No such process 00:06:07.971 11:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.971 11:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:07.971 11:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:07.971 11:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:07.971 11:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:07.971 11:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:07.971 11:44:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:07.971 11:44:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:07.971 11:44:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:07.971 11:44:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:07.971 11:44:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59386 00:06:07.971 11:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59386 ']' 00:06:07.971 11:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59386 00:06:07.971 11:44:34 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:07.971 11:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.971 11:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59386 00:06:07.971 11:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.971 11:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.971 11:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59386' 00:06:07.971 killing process with pid 59386 00:06:07.971 11:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59386 00:06:07.971 11:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59386 00:06:10.594 00:06:10.594 real 0m4.454s 00:06:10.594 user 0m12.100s 00:06:10.594 sys 0m0.583s 00:06:10.594 11:44:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.594 11:44:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.594 ************************************ 00:06:10.594 END TEST locking_overlapped_coremask 00:06:10.594 ************************************ 00:06:10.594 11:44:36 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:10.594 11:44:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.594 11:44:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.594 11:44:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.594 ************************************ 00:06:10.594 START TEST 
locking_overlapped_coremask_via_rpc 00:06:10.594 ************************************ 00:06:10.594 11:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:10.594 11:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59474 00:06:10.594 11:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:10.594 11:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59474 /var/tmp/spdk.sock 00:06:10.594 11:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59474 ']' 00:06:10.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.594 11:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.594 11:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.594 11:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.595 11:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.595 11:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.595 [2024-11-27 11:44:36.937904] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:06:10.595 [2024-11-27 11:44:36.938024] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59474 ] 00:06:10.854 [2024-11-27 11:44:37.111340] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:10.854 [2024-11-27 11:44:37.111425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.854 [2024-11-27 11:44:37.229532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.854 [2024-11-27 11:44:37.229719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.854 [2024-11-27 11:44:37.229774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.793 11:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.793 11:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:11.793 11:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59492 00:06:11.793 11:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:11.793 11:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59492 /var/tmp/spdk2.sock 00:06:11.793 11:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59492 ']' 00:06:11.793 11:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.793 11:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.793 11:44:38 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.793 11:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.793 11:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.053 [2024-11-27 11:44:38.200867] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:06:12.053 [2024-11-27 11:44:38.201073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59492 ] 00:06:12.053 [2024-11-27 11:44:38.370398] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:12.053 [2024-11-27 11:44:38.370470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.311 [2024-11-27 11:44:38.661876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.311 [2024-11-27 11:44:38.665046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.311 [2024-11-27 11:44:38.665075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:14.847 11:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.847 11:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:14.847 11:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:14.847 11:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.847 11:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.847 11:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.847 11:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.847 11:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:14.847 11:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.847 11:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:14.847 11:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.847 11:44:40 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:14.847 11:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.847 11:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.847 11:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.847 11:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.847 [2024-11-27 11:44:40.797127] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59474 has claimed it. 00:06:14.847 request: 00:06:14.847 { 00:06:14.847 "method": "framework_enable_cpumask_locks", 00:06:14.847 "req_id": 1 00:06:14.847 } 00:06:14.847 Got JSON-RPC error response 00:06:14.847 response: 00:06:14.847 { 00:06:14.847 "code": -32603, 00:06:14.847 "message": "Failed to claim CPU core: 2" 00:06:14.847 } 00:06:14.847 11:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:14.847 11:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:14.847 11:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.847 11:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:14.847 11:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.847 11:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59474 /var/tmp/spdk.sock 00:06:14.847 11:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59474 ']' 00:06:14.847 11:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.847 11:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.847 11:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.847 11:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.847 11:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.847 11:44:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.847 11:44:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:14.847 11:44:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59492 /var/tmp/spdk2.sock 00:06:14.847 11:44:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59492 ']' 00:06:14.847 11:44:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.847 11:44:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.847 11:44:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:14.847 11:44:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.847 11:44:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.106 11:44:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.106 11:44:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:15.106 11:44:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:15.106 11:44:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:15.106 11:44:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:15.106 11:44:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:15.106 00:06:15.106 real 0m4.434s 00:06:15.106 user 0m1.324s 00:06:15.106 sys 0m0.214s 00:06:15.106 11:44:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.106 11:44:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.106 ************************************ 00:06:15.106 END TEST locking_overlapped_coremask_via_rpc 00:06:15.106 ************************************ 00:06:15.106 11:44:41 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:15.106 11:44:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59474 ]] 00:06:15.106 11:44:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59474 00:06:15.106 11:44:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59474 ']' 00:06:15.106 11:44:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59474 00:06:15.106 11:44:41 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:15.106 11:44:41 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.106 11:44:41 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59474 00:06:15.106 11:44:41 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.106 11:44:41 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.106 11:44:41 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59474' 00:06:15.106 killing process with pid 59474 00:06:15.107 11:44:41 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59474 00:06:15.107 11:44:41 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59474 00:06:17.661 11:44:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59492 ]] 00:06:17.661 11:44:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59492 00:06:17.661 11:44:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59492 ']' 00:06:17.661 11:44:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59492 00:06:17.661 11:44:43 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:17.661 11:44:43 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.661 11:44:43 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59492 00:06:17.661 killing process with pid 59492 00:06:17.661 11:44:43 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:17.661 11:44:43 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:17.661 11:44:43 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59492' 00:06:17.661 11:44:43 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59492 00:06:17.661 11:44:43 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59492 00:06:20.949 11:44:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:20.949 Process with pid 59474 is not found 00:06:20.949 Process with pid 59492 is not found 00:06:20.949 11:44:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:20.949 11:44:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59474 ]] 00:06:20.949 11:44:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59474 00:06:20.949 11:44:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59474 ']' 00:06:20.949 11:44:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59474 00:06:20.949 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59474) - No such process 00:06:20.949 11:44:46 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59474 is not found' 00:06:20.949 11:44:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59492 ]] 00:06:20.949 11:44:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59492 00:06:20.949 11:44:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59492 ']' 00:06:20.949 11:44:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59492 00:06:20.949 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59492) - No such process 00:06:20.949 11:44:46 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59492 is not found' 00:06:20.949 11:44:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:20.949 00:06:20.949 real 0m50.671s 00:06:20.949 user 1m27.025s 00:06:20.949 sys 0m6.765s 00:06:20.949 11:44:46 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.949 11:44:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.949 
************************************ 00:06:20.949 END TEST cpu_locks 00:06:20.949 ************************************ 00:06:20.949 ************************************ 00:06:20.949 END TEST event 00:06:20.949 ************************************ 00:06:20.949 00:06:20.949 real 1m21.033s 00:06:20.949 user 2m24.544s 00:06:20.949 sys 0m11.006s 00:06:20.949 11:44:46 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.949 11:44:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.949 11:44:46 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:20.949 11:44:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.949 11:44:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.949 11:44:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.949 ************************************ 00:06:20.949 START TEST thread 00:06:20.949 ************************************ 00:06:20.950 11:44:46 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:20.950 * Looking for test storage... 
00:06:20.950 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:20.950 11:44:46 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:20.950 11:44:46 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:20.950 11:44:46 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:20.950 11:44:46 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:20.950 11:44:46 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.950 11:44:46 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.950 11:44:46 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.950 11:44:46 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.950 11:44:46 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.950 11:44:46 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.950 11:44:46 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.950 11:44:46 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.950 11:44:46 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.950 11:44:46 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.950 11:44:46 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.950 11:44:46 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:20.950 11:44:46 thread -- scripts/common.sh@345 -- # : 1 00:06:20.950 11:44:46 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.950 11:44:46 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.950 11:44:46 thread -- scripts/common.sh@365 -- # decimal 1 00:06:20.950 11:44:46 thread -- scripts/common.sh@353 -- # local d=1 00:06:20.950 11:44:46 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.950 11:44:46 thread -- scripts/common.sh@355 -- # echo 1 00:06:20.950 11:44:46 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.950 11:44:46 thread -- scripts/common.sh@366 -- # decimal 2 00:06:20.950 11:44:46 thread -- scripts/common.sh@353 -- # local d=2 00:06:20.950 11:44:46 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.950 11:44:46 thread -- scripts/common.sh@355 -- # echo 2 00:06:20.950 11:44:46 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.950 11:44:46 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.950 11:44:46 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.950 11:44:46 thread -- scripts/common.sh@368 -- # return 0 00:06:20.950 11:44:46 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.950 11:44:46 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:20.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.950 --rc genhtml_branch_coverage=1 00:06:20.950 --rc genhtml_function_coverage=1 00:06:20.950 --rc genhtml_legend=1 00:06:20.950 --rc geninfo_all_blocks=1 00:06:20.950 --rc geninfo_unexecuted_blocks=1 00:06:20.950 00:06:20.950 ' 00:06:20.950 11:44:46 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:20.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.950 --rc genhtml_branch_coverage=1 00:06:20.950 --rc genhtml_function_coverage=1 00:06:20.950 --rc genhtml_legend=1 00:06:20.950 --rc geninfo_all_blocks=1 00:06:20.950 --rc geninfo_unexecuted_blocks=1 00:06:20.950 00:06:20.950 ' 00:06:20.950 11:44:46 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:20.950 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.950 --rc genhtml_branch_coverage=1 00:06:20.950 --rc genhtml_function_coverage=1 00:06:20.950 --rc genhtml_legend=1 00:06:20.950 --rc geninfo_all_blocks=1 00:06:20.950 --rc geninfo_unexecuted_blocks=1 00:06:20.950 00:06:20.950 ' 00:06:20.950 11:44:46 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:20.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.950 --rc genhtml_branch_coverage=1 00:06:20.950 --rc genhtml_function_coverage=1 00:06:20.950 --rc genhtml_legend=1 00:06:20.950 --rc geninfo_all_blocks=1 00:06:20.950 --rc geninfo_unexecuted_blocks=1 00:06:20.950 00:06:20.950 ' 00:06:20.950 11:44:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:20.950 11:44:46 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:20.950 11:44:46 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.950 11:44:46 thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.950 ************************************ 00:06:20.950 START TEST thread_poller_perf 00:06:20.950 ************************************ 00:06:20.950 11:44:46 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:20.950 [2024-11-27 11:44:47.012456] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:06:20.950 [2024-11-27 11:44:47.012640] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59700 ] 00:06:20.950 [2024-11-27 11:44:47.188751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.950 [2024-11-27 11:44:47.307220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.950 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:22.340 [2024-11-27T11:44:48.725Z] ====================================== 00:06:22.340 [2024-11-27T11:44:48.725Z] busy:2301221926 (cyc) 00:06:22.340 [2024-11-27T11:44:48.725Z] total_run_count: 389000 00:06:22.340 [2024-11-27T11:44:48.725Z] tsc_hz: 2290000000 (cyc) 00:06:22.340 [2024-11-27T11:44:48.725Z] ====================================== 00:06:22.340 [2024-11-27T11:44:48.725Z] poller_cost: 5915 (cyc), 2582 (nsec) 00:06:22.340 00:06:22.340 real 0m1.573s 00:06:22.340 user 0m1.365s 00:06:22.340 sys 0m0.101s 00:06:22.340 11:44:48 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.340 11:44:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:22.340 ************************************ 00:06:22.340 END TEST thread_poller_perf 00:06:22.340 ************************************ 00:06:22.340 11:44:48 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:22.340 11:44:48 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:22.340 11:44:48 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.340 11:44:48 thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.340 ************************************ 00:06:22.340 START TEST thread_poller_perf 00:06:22.340 
************************************ 00:06:22.340 11:44:48 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:22.340 [2024-11-27 11:44:48.657486] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:06:22.340 [2024-11-27 11:44:48.657649] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59731 ] 00:06:22.599 [2024-11-27 11:44:48.828582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.599 [2024-11-27 11:44:48.944981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.599 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:23.981 [2024-11-27T11:44:50.366Z] ====================================== 00:06:23.981 [2024-11-27T11:44:50.366Z] busy:2293287074 (cyc) 00:06:23.981 [2024-11-27T11:44:50.366Z] total_run_count: 5234000 00:06:23.981 [2024-11-27T11:44:50.366Z] tsc_hz: 2290000000 (cyc) 00:06:23.981 [2024-11-27T11:44:50.366Z] ====================================== 00:06:23.981 [2024-11-27T11:44:50.366Z] poller_cost: 438 (cyc), 191 (nsec) 00:06:23.981 00:06:23.981 real 0m1.561s 00:06:23.981 user 0m1.365s 00:06:23.981 sys 0m0.089s 00:06:23.981 11:44:50 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.981 ************************************ 00:06:23.981 END TEST thread_poller_perf 00:06:23.981 ************************************ 00:06:23.981 11:44:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:23.981 11:44:50 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:23.981 ************************************ 00:06:23.981 END TEST thread 00:06:23.981 ************************************ 00:06:23.981 
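The two poller_perf summaries above each report a `poller_cost` line that can be re-derived from the raw counters. A minimal sketch, assuming the tool computes busy cycles divided by `total_run_count` with integer division, then converts that cycle count to nanoseconds at `tsc_hz` (both runs in the log match this assumption; `poller_cost` here is our helper name, not an SPDK function):

```shell
# Re-derive the poller_cost figures from the two runs above.
# Assumed formula: cyc = busy_cyc / total_run_count (integer),
#                  nsec = cyc * 1e9 / tsc_hz (integer).
poller_cost() {  # usage: poller_cost <busy_cyc> <total_run_count> <tsc_hz>
  local cyc=$(( $1 / $2 ))
  echo "$cyc $(( cyc * 1000000000 / $3 ))"
}
poller_cost 2301221926 389000 2290000000    # run 1 (-l 1): "5915 2582"
poller_cost 2293287074 5234000 2290000000   # run 2 (-l 0): "438 191"
```

Both results reproduce the `poller_cost: 5915 (cyc), 2582 (nsec)` and `438 (cyc), 191 (nsec)` lines reported above, which supports the assumed formula.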
00:06:23.981 real 0m3.494s 00:06:23.981 user 0m2.901s 00:06:23.981 sys 0m0.394s 00:06:23.981 11:44:50 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.981 11:44:50 thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.981 11:44:50 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:23.981 11:44:50 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:23.981 11:44:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.981 11:44:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.981 11:44:50 -- common/autotest_common.sh@10 -- # set +x 00:06:23.981 ************************************ 00:06:23.981 START TEST app_cmdline 00:06:23.981 ************************************ 00:06:23.981 11:44:50 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:24.241 * Looking for test storage... 00:06:24.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:24.241 11:44:50 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:24.241 11:44:50 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:24.241 11:44:50 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:24.241 11:44:50 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:24.241 11:44:50 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.241 11:44:50 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.241 11:44:50 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.241 11:44:50 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.241 11:44:50 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.241 11:44:50 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.241 11:44:50 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.241 11:44:50 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:24.241 11:44:50 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.241 11:44:50 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.241 11:44:50 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.241 11:44:50 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:24.241 11:44:50 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:24.241 11:44:50 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.241 11:44:50 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.241 11:44:50 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:24.241 11:44:50 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:24.241 11:44:50 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.241 11:44:50 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:24.241 11:44:50 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.241 11:44:50 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:24.241 11:44:50 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:24.241 11:44:50 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.241 11:44:50 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:24.241 11:44:50 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.241 11:44:50 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.241 11:44:50 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.241 11:44:50 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:24.241 11:44:50 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.241 11:44:50 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:24.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.241 --rc genhtml_branch_coverage=1 00:06:24.241 --rc genhtml_function_coverage=1 00:06:24.241 --rc 
genhtml_legend=1 00:06:24.241 --rc geninfo_all_blocks=1 00:06:24.241 --rc geninfo_unexecuted_blocks=1 00:06:24.241 00:06:24.241 ' 00:06:24.241 11:44:50 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:24.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.241 --rc genhtml_branch_coverage=1 00:06:24.241 --rc genhtml_function_coverage=1 00:06:24.241 --rc genhtml_legend=1 00:06:24.241 --rc geninfo_all_blocks=1 00:06:24.241 --rc geninfo_unexecuted_blocks=1 00:06:24.241 00:06:24.241 ' 00:06:24.241 11:44:50 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:24.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.241 --rc genhtml_branch_coverage=1 00:06:24.241 --rc genhtml_function_coverage=1 00:06:24.241 --rc genhtml_legend=1 00:06:24.241 --rc geninfo_all_blocks=1 00:06:24.241 --rc geninfo_unexecuted_blocks=1 00:06:24.241 00:06:24.241 ' 00:06:24.241 11:44:50 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:24.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.241 --rc genhtml_branch_coverage=1 00:06:24.241 --rc genhtml_function_coverage=1 00:06:24.241 --rc genhtml_legend=1 00:06:24.241 --rc geninfo_all_blocks=1 00:06:24.241 --rc geninfo_unexecuted_blocks=1 00:06:24.241 00:06:24.241 ' 00:06:24.241 11:44:50 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:24.241 11:44:50 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:24.241 11:44:50 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59820 00:06:24.241 11:44:50 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59820 00:06:24.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
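The `lt 1.15 2` / `cmp_versions` trace from scripts/common.sh, replayed before each test above, reduces to a field-by-field numeric comparison after splitting on `.`, `-`, and `:`. A standalone re-derivation of the traced logic (it omits the `decimal` sanitizing step the real script applies, so it assumes purely numeric fields):

```shell
# Sketch of the version comparison traced above: split both versions on
# [.-:], then compare field by field; missing fields count as 0.
# Returns 0 (true) when $1 < $2, 1 otherwise.
lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v
  for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1
}
lt 1.15 2 && echo "1.15 < 2"   # prints "1.15 < 2", as the trace concludes
```

This is why the trace ends with `ver1[v]=1`, `ver2[v]=2`, `(( ver1[v] < ver2[v] ))`, `return 0`: lcov 1.15 is older than 2, so the branch-coverage options get enabled.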
00:06:24.241 11:44:50 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59820 ']' 00:06:24.241 11:44:50 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.241 11:44:50 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.241 11:44:50 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.241 11:44:50 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.241 11:44:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:24.241 [2024-11-27 11:44:50.601609] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:06:24.241 [2024-11-27 11:44:50.602155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59820 ] 00:06:24.502 [2024-11-27 11:44:50.776887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.762 [2024-11-27 11:44:50.893259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.700 11:44:51 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.700 11:44:51 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:25.700 11:44:51 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:25.700 { 00:06:25.700 "version": "SPDK v25.01-pre git sha1 24f0cb4c3", 00:06:25.700 "fields": { 00:06:25.700 "major": 25, 00:06:25.700 "minor": 1, 00:06:25.700 "patch": 0, 00:06:25.700 "suffix": "-pre", 00:06:25.700 "commit": "24f0cb4c3" 00:06:25.700 } 00:06:25.700 } 00:06:25.700 11:44:51 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:25.700 11:44:51 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 
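The `spdk_get_version` reply above carries the fields that test/app/version.sh recombines later in this log (`version=25.1`, then `25.1rc0` because `patch` is 0 and the suffix is `-pre`). A sketch of that composition, with the rules read off the version.sh trace — `spdk_version_string` is our name for it, not an SPDK helper:

```shell
# Compose the user-facing version string from spdk_get_version fields:
# "major.minor", append ".patch" only when patch != 0, map "-pre" to "rc0".
spdk_version_string() {  # usage: spdk_version_string <major> <minor> <patch> <suffix>
  local ver="$1.$2"
  if (( $3 != 0 )); then ver+=".$3"; fi
  if [[ $4 == -pre ]]; then ver+="rc0"; fi
  echo "$ver"
}
spdk_version_string 25 1 0 -pre   # prints "25.1rc0"
```

The result matches the `py_version=25.1rc0` value the version test compares against `spdk.__version__`.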
00:06:25.700 11:44:51 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:25.700 11:44:51 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:25.700 11:44:51 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:25.700 11:44:51 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.700 11:44:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:25.700 11:44:51 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:25.700 11:44:51 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:25.700 11:44:51 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.700 11:44:51 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:25.700 11:44:51 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:25.700 11:44:51 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:25.700 11:44:51 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:25.700 11:44:51 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:25.700 11:44:51 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:25.700 11:44:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.700 11:44:51 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:25.700 11:44:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.700 11:44:51 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:25.700 11:44:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.700 11:44:52 
app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:25.701 11:44:52 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:25.701 11:44:52 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:25.960 request: 00:06:25.960 { 00:06:25.960 "method": "env_dpdk_get_mem_stats", 00:06:25.960 "req_id": 1 00:06:25.960 } 00:06:25.960 Got JSON-RPC error response 00:06:25.960 response: 00:06:25.960 { 00:06:25.960 "code": -32601, 00:06:25.960 "message": "Method not found" 00:06:25.960 } 00:06:25.960 11:44:52 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:25.960 11:44:52 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:25.960 11:44:52 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:25.960 11:44:52 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:25.960 11:44:52 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59820 00:06:25.960 11:44:52 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59820 ']' 00:06:25.960 11:44:52 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59820 00:06:25.960 11:44:52 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:25.960 11:44:52 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.960 11:44:52 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59820 00:06:25.960 killing process with pid 59820 00:06:25.960 11:44:52 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.960 11:44:52 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.960 11:44:52 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59820' 00:06:25.960 11:44:52 app_cmdline -- common/autotest_common.sh@973 -- # kill 59820 00:06:25.960 11:44:52 app_cmdline -- 
common/autotest_common.sh@978 -- # wait 59820 00:06:28.496 00:06:28.496 real 0m4.339s 00:06:28.496 user 0m4.580s 00:06:28.496 sys 0m0.598s 00:06:28.496 ************************************ 00:06:28.496 END TEST app_cmdline 00:06:28.496 ************************************ 00:06:28.496 11:44:54 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.496 11:44:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:28.496 11:44:54 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:28.496 11:44:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.496 11:44:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.496 11:44:54 -- common/autotest_common.sh@10 -- # set +x 00:06:28.496 ************************************ 00:06:28.496 START TEST version 00:06:28.496 ************************************ 00:06:28.496 11:44:54 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:28.496 * Looking for test storage... 
00:06:28.496 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:28.496 11:44:54 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:28.496 11:44:54 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:28.496 11:44:54 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:28.755 11:44:54 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:28.756 11:44:54 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.756 11:44:54 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.756 11:44:54 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.756 11:44:54 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.756 11:44:54 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.756 11:44:54 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.756 11:44:54 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.756 11:44:54 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.756 11:44:54 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.756 11:44:54 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.756 11:44:54 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.756 11:44:54 version -- scripts/common.sh@344 -- # case "$op" in 00:06:28.756 11:44:54 version -- scripts/common.sh@345 -- # : 1 00:06:28.756 11:44:54 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.756 11:44:54 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:28.756 11:44:54 version -- scripts/common.sh@365 -- # decimal 1 00:06:28.756 11:44:54 version -- scripts/common.sh@353 -- # local d=1 00:06:28.756 11:44:54 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.756 11:44:54 version -- scripts/common.sh@355 -- # echo 1 00:06:28.756 11:44:54 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.756 11:44:54 version -- scripts/common.sh@366 -- # decimal 2 00:06:28.756 11:44:54 version -- scripts/common.sh@353 -- # local d=2 00:06:28.756 11:44:54 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.756 11:44:54 version -- scripts/common.sh@355 -- # echo 2 00:06:28.756 11:44:54 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.756 11:44:54 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.756 11:44:54 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.756 11:44:54 version -- scripts/common.sh@368 -- # return 0 00:06:28.756 11:44:54 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.756 11:44:54 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:28.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.756 --rc genhtml_branch_coverage=1 00:06:28.756 --rc genhtml_function_coverage=1 00:06:28.756 --rc genhtml_legend=1 00:06:28.756 --rc geninfo_all_blocks=1 00:06:28.756 --rc geninfo_unexecuted_blocks=1 00:06:28.756 00:06:28.756 ' 00:06:28.756 11:44:54 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:28.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.756 --rc genhtml_branch_coverage=1 00:06:28.756 --rc genhtml_function_coverage=1 00:06:28.756 --rc genhtml_legend=1 00:06:28.756 --rc geninfo_all_blocks=1 00:06:28.756 --rc geninfo_unexecuted_blocks=1 00:06:28.756 00:06:28.756 ' 00:06:28.756 11:44:54 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:28.756 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.756 --rc genhtml_branch_coverage=1 00:06:28.756 --rc genhtml_function_coverage=1 00:06:28.756 --rc genhtml_legend=1 00:06:28.756 --rc geninfo_all_blocks=1 00:06:28.756 --rc geninfo_unexecuted_blocks=1 00:06:28.756 00:06:28.756 ' 00:06:28.756 11:44:54 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:28.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.756 --rc genhtml_branch_coverage=1 00:06:28.756 --rc genhtml_function_coverage=1 00:06:28.756 --rc genhtml_legend=1 00:06:28.756 --rc geninfo_all_blocks=1 00:06:28.756 --rc geninfo_unexecuted_blocks=1 00:06:28.756 00:06:28.756 ' 00:06:28.756 11:44:54 version -- app/version.sh@17 -- # get_header_version major 00:06:28.756 11:44:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:28.756 11:44:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.756 11:44:54 version -- app/version.sh@14 -- # cut -f2 00:06:28.756 11:44:54 version -- app/version.sh@17 -- # major=25 00:06:28.756 11:44:54 version -- app/version.sh@18 -- # get_header_version minor 00:06:28.756 11:44:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:28.756 11:44:54 version -- app/version.sh@14 -- # cut -f2 00:06:28.756 11:44:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.756 11:44:54 version -- app/version.sh@18 -- # minor=1 00:06:28.756 11:44:54 version -- app/version.sh@19 -- # get_header_version patch 00:06:28.756 11:44:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:28.756 11:44:54 version -- app/version.sh@14 -- # cut -f2 00:06:28.756 11:44:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.756 11:44:54 version -- app/version.sh@19 -- # patch=0 00:06:28.756 
11:44:54 version -- app/version.sh@20 -- # get_header_version suffix 00:06:28.756 11:44:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:28.756 11:44:54 version -- app/version.sh@14 -- # cut -f2 00:06:28.756 11:44:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.756 11:44:54 version -- app/version.sh@20 -- # suffix=-pre 00:06:28.756 11:44:54 version -- app/version.sh@22 -- # version=25.1 00:06:28.756 11:44:54 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:28.756 11:44:54 version -- app/version.sh@28 -- # version=25.1rc0 00:06:28.756 11:44:54 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:28.756 11:44:54 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:28.756 11:44:55 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:28.756 11:44:55 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:28.756 00:06:28.756 real 0m0.332s 00:06:28.756 user 0m0.194s 00:06:28.756 sys 0m0.193s 00:06:28.756 11:44:55 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.756 11:44:55 version -- common/autotest_common.sh@10 -- # set +x 00:06:28.756 ************************************ 00:06:28.756 END TEST version 00:06:28.756 ************************************ 00:06:28.756 11:44:55 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:28.756 11:44:55 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:28.756 11:44:55 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:28.756 11:44:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.756 11:44:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.756 11:44:55 -- 
common/autotest_common.sh@10 -- # set +x 00:06:28.756 ************************************ 00:06:28.756 START TEST bdev_raid 00:06:28.756 ************************************ 00:06:28.756 11:44:55 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:29.016 * Looking for test storage... 00:06:29.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:29.016 11:44:55 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:29.016 11:44:55 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:06:29.016 11:44:55 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:29.016 11:44:55 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:29.016 11:44:55 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.016 11:44:55 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.016 11:44:55 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.016 11:44:55 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.017 11:44:55 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.017 11:44:55 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.017 11:44:55 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.017 11:44:55 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.017 11:44:55 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.017 11:44:55 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.017 11:44:55 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.017 11:44:55 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:29.017 11:44:55 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:29.017 11:44:55 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.017 11:44:55 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:29.017 11:44:55 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:29.017 11:44:55 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:29.017 11:44:55 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.017 11:44:55 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:29.017 11:44:55 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.017 11:44:55 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:29.017 11:44:55 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:29.017 11:44:55 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.017 11:44:55 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:29.017 11:44:55 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.017 11:44:55 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.017 11:44:55 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.017 11:44:55 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:29.017 11:44:55 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.017 11:44:55 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:29.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.017 --rc genhtml_branch_coverage=1 00:06:29.017 --rc genhtml_function_coverage=1 00:06:29.017 --rc genhtml_legend=1 00:06:29.017 --rc geninfo_all_blocks=1 00:06:29.017 --rc geninfo_unexecuted_blocks=1 00:06:29.017 00:06:29.017 ' 00:06:29.017 11:44:55 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:29.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.017 --rc genhtml_branch_coverage=1 00:06:29.017 --rc genhtml_function_coverage=1 00:06:29.017 --rc genhtml_legend=1 00:06:29.017 --rc geninfo_all_blocks=1 00:06:29.017 --rc geninfo_unexecuted_blocks=1 00:06:29.017 00:06:29.017 ' 00:06:29.017 11:44:55 bdev_raid -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:06:29.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.017 --rc genhtml_branch_coverage=1 00:06:29.017 --rc genhtml_function_coverage=1 00:06:29.017 --rc genhtml_legend=1 00:06:29.017 --rc geninfo_all_blocks=1 00:06:29.017 --rc geninfo_unexecuted_blocks=1 00:06:29.017 00:06:29.017 ' 00:06:29.017 11:44:55 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:29.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.017 --rc genhtml_branch_coverage=1 00:06:29.017 --rc genhtml_function_coverage=1 00:06:29.017 --rc genhtml_legend=1 00:06:29.017 --rc geninfo_all_blocks=1 00:06:29.017 --rc geninfo_unexecuted_blocks=1 00:06:29.017 00:06:29.017 ' 00:06:29.017 11:44:55 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:29.017 11:44:55 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:29.017 11:44:55 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:29.017 11:44:55 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:29.017 11:44:55 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:29.017 11:44:55 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:29.017 11:44:55 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:29.017 11:44:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.017 11:44:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.017 11:44:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:29.017 ************************************ 00:06:29.017 START TEST raid1_resize_data_offset_test 00:06:29.017 ************************************ 00:06:29.017 11:44:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:29.017 11:44:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # 
raid_pid=60013 00:06:29.017 11:44:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60013' 00:06:29.017 Process raid pid: 60013 00:06:29.017 11:44:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:29.017 11:44:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60013 00:06:29.017 11:44:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60013 ']' 00:06:29.017 11:44:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.017 11:44:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.017 11:44:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.017 11:44:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.017 11:44:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.278 [2024-11-27 11:44:55.421793] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:06:29.278 [2024-11-27 11:44:55.421926] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:29.278 [2024-11-27 11:44:55.598529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:29.538 [2024-11-27 11:44:55.714502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:29.538 [2024-11-27 11:44:55.914777] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:29.538 [2024-11-27 11:44:55.914828] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:30.108 11:44:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:30.108 11:44:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0
00:06:30.108 11:44:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16
00:06:30.108 11:44:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:30.108 11:44:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:30.108 malloc0
00:06:30.108 11:44:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:30.108 11:44:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16
00:06:30.108 11:44:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:30.108 11:44:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:30.108 malloc1
00:06:30.108 11:44:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:30.108 11:44:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512
00:06:30.109 11:44:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:30.109 11:44:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:30.109 null0
00:06:30.109 11:44:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:30.109 11:44:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s
00:06:30.109 11:44:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:30.109 11:44:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:30.109 [2024-11-27 11:44:56.444550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed
00:06:30.109 [2024-11-27 11:44:56.446352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:06:30.109 [2024-11-27 11:44:56.446406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed
00:06:30.109 [2024-11-27 11:44:56.446565] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:06:30.109 [2024-11-27 11:44:56.446579] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512
00:06:30.109 [2024-11-27 11:44:56.446828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:06:30.109 [2024-11-27 11:44:56.447020] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:06:30.109 [2024-11-27 11:44:56.447033] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:06:30.109 [2024-11-27 11:44:56.447166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:30.109 11:44:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:30.109 11:44:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:30.109 11:44:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:30.109 11:44:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:06:30.109 11:44:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:30.109 11:44:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:30.369 11:44:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:06:30.369 11:44:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:06:30.369 11:44:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:30.369 11:44:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:30.369 [2024-11-27 11:44:56.504476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:06:30.369 11:44:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:30.369 11:44:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:06:30.369 11:44:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:30.369 11:44:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:30.939 malloc2
00:06:30.939 11:44:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:30.939 11:44:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:06:30.939 11:44:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:30.939 11:44:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:30.939 [2024-11-27 11:44:57.041674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:06:30.939 [2024-11-27 11:44:57.058475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:30.939 11:44:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:30.939 [2024-11-27 11:44:57.060301] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:06:30.939 11:44:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:30.939 11:44:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:30.939 11:44:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:06:30.939 11:44:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:30.939 11:44:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:30.939 11:44:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:06:30.939 11:44:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60013
00:06:30.939 11:44:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60013 ']'
00:06:30.939 11:44:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60013
00:06:30.939 11:44:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname
00:06:30.939 11:44:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:30.939 11:44:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60013
00:06:30.939 11:44:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:30.939 11:44:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:30.939 killing process with pid 60013
11:44:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60013'
00:06:30.939 11:44:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60013
00:06:30.939 11:44:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60013
00:06:30.939 [2024-11-27 11:44:57.151544] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:30.939 [2024-11-27 11:44:57.152974] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:06:30.939 [2024-11-27 11:44:57.153049] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:30.939 [2024-11-27 11:44:57.153069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:06:30.939 [2024-11-27 11:44:57.190804] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:30.939 [2024-11-27 11:44:57.191156] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:30.939 [2024-11-27 11:44:57.191198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:06:32.848 [2024-11-27 11:44:58.984802] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:33.788 11:45:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:06:33.788
00:06:33.788 real 0m4.784s
00:06:33.788 user 0m4.699s
00:06:33.788 sys 0m0.538s
00:06:33.788 11:45:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:33.788 11:45:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:33.788 ************************************
00:06:33.788 END TEST raid1_resize_data_offset_test
00:06:33.788 ************************************
00:06:34.048 11:45:00 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:06:34.048 11:45:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:34.048 11:45:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:34.048 11:45:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:34.048 ************************************
00:06:34.048 START TEST raid0_resize_superblock_test
00:06:34.048 ************************************
00:06:34.048 11:45:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0
00:06:34.048 11:45:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:06:34.048 11:45:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60091
00:06:34.048 11:45:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:34.048 11:45:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60091'
00:06:34.048 Process raid pid: 60091
00:06:34.048 11:45:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60091
00:06:34.048 11:45:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60091 ']'
00:06:34.048 11:45:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:34.048 11:45:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:34.048 11:45:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:34.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:34.048 11:45:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:34.048 11:45:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:34.048 [2024-11-27 11:45:00.273083] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization...
00:06:34.048 [2024-11-27 11:45:00.273246] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:34.308 [2024-11-27 11:45:00.450979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:34.308 [2024-11-27 11:45:00.571491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:34.567 [2024-11-27 11:45:00.779466] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:34.567 [2024-11-27 11:45:00.779575] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:34.827 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:34.827 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:06:34.827 11:45:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:06:34.827 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:34.827 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.397 malloc0
00:06:35.397 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.397 11:45:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:35.397 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.397 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.397 [2024-11-27 11:45:01.653488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:06:35.397 [2024-11-27 11:45:01.653609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:35.397 [2024-11-27 11:45:01.653653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:06:35.397 [2024-11-27 11:45:01.653669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:35.397 [2024-11-27 11:45:01.655937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:35.397 [2024-11-27 11:45:01.655984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:06:35.397 pt0
00:06:35.397 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.397 11:45:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:06:35.397 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.397 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.397 b80f49b2-6d48-4734-ad78-49aad08c9e77
00:06:35.397 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.397 11:45:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:06:35.397 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.397 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.397 1a397e54-6ed4-4f10-b9e7-089dc47fd4d8
00:06:35.397 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.397 11:45:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:06:35.397 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.397 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.656 8354c3ee-3bb0-4281-9571-941cf950a442
00:06:35.656 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.656 11:45:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.657 [2024-11-27 11:45:01.789404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 1a397e54-6ed4-4f10-b9e7-089dc47fd4d8 is claimed
[2024-11-27 11:45:01.789487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8354c3ee-3bb0-4281-9571-941cf950a442 is claimed
[2024-11-27 11:45:01.789606] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
[2024-11-27 11:45:01.789620] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
[2024-11-27 11:45:01.789892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:35.657 [2024-11-27 11:45:01.790101] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
[2024-11-27 11:45:01.790113] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
[2024-11-27 11:45:01.790266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.657 [2024-11-27 11:45:01.901437] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.657 [2024-11-27 11:45:01.941303] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
[2024-11-27 11:45:01.941330] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '1a397e54-6ed4-4f10-b9e7-089dc47fd4d8' was resized: old size 131072, new size 204800
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.657 [2024-11-27 11:45:01.953220] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:35.657 [2024-11-27 11:45:01.953244] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '8354c3ee-3bb0-4281-9571-941cf950a442' was resized: old size 131072, new size 204800
[2024-11-27 11:45:01.953272] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.657 11:45:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.657 11:45:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:06:35.657 11:45:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:35.657 11:45:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:06:35.657 11:45:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.657 11:45:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.657 11:45:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.918 11:45:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:06:35.918 11:45:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:35.918 11:45:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:06:35.918 11:45:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:35.918 11:45:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:35.918 11:45:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.918 11:45:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.918 [2024-11-27 11:45:02.065223] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:35.918 11:45:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.918 11:45:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:35.918 11:45:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:35.918 11:45:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:06:35.918 11:45:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:06:35.918 11:45:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.918 11:45:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.918 [2024-11-27 11:45:02.112946] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
00:06:35.918 [2024-11-27 11:45:02.113084] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
00:06:35.918 [2024-11-27 11:45:02.113119] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:06:35.918 [2024-11-27 11:45:02.113158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
00:06:35.918 [2024-11-27 11:45:02.113301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:35.918 [2024-11-27 11:45:02.113370] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:35.918 [2024-11-27 11:45:02.113418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:06:35.918 11:45:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.918 11:45:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:35.918 11:45:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.918 11:45:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.918 [2024-11-27 11:45:02.124763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-11-27 11:45:02.124858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-27 11:45:02.124881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
[2024-11-27 11:45:02.124891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-27 11:45:02.127060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-27 11:45:02.127096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:06:35.918 [2024-11-27 11:45:02.128781] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 1a397e54-6ed4-4f10-b9e7-089dc47fd4d8
[2024-11-27 11:45:02.128937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 1a397e54-6ed4-4f10-b9e7-089dc47fd4d8 is claimed
[2024-11-27 11:45:02.129055] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 8354c3ee-3bb0-4281-9571-941cf950a442
[2024-11-27 11:45:02.129075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8354c3ee-3bb0-4281-9571-941cf950a442 is claimed
[2024-11-27 11:45:02.129241] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 8354c3ee-3bb0-4281-9571-941cf950a442 (2) smaller than existing raid bdev Raid (3)
[2024-11-27 11:45:02.129266] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 1a397e54-6ed4-4f10-b9e7-089dc47fd4d8: File exists
[2024-11-27 11:45:02.129302] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
[2024-11-27 11:45:02.129313] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
pt0
[2024-11-27 11:45:02.129570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
[2024-11-27 11:45:02.129722] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
[2024-11-27 11:45:02.129737] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
[2024-11-27 11:45:02.129897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:35.918 11:45:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.918 11:45:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:06:35.918 11:45:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.918 11:45:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.918 11:45:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.918 11:45:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:35.918 11:45:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:35.919 11:45:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:35.919 11:45:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
00:06:35.919 11:45:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.919 11:45:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.919 [2024-11-27 11:45:02.153047] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:35.919 11:45:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.919 11:45:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:35.919 11:45:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:35.919 11:45:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:06:35.919 11:45:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60091
00:06:35.919 11:45:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60091 ']'
00:06:35.919 11:45:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60091
00:06:35.919 11:45:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname
00:06:35.919 11:45:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:35.919 11:45:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60091
00:06:35.919 11:45:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:35.919 11:45:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 60091
00:06:35.919 11:45:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60091'
00:06:35.919 11:45:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60091
00:06:35.919 [2024-11-27 11:45:02.232940] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:35.919 [2024-11-27 11:45:02.233122] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:35.919 11:45:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60091
00:06:35.919 [2024-11-27 11:45:02.233221] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:35.919 [2024-11-27 11:45:02.233269] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:06:37.301 [2024-11-27 11:45:03.662163] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:38.680 11:45:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:06:38.680
00:06:38.680 real 0m4.597s
00:06:38.680 user 0m4.805s
00:06:38.680 sys 0m0.582s
00:06:38.680 11:45:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:38.680 ************************************
00:06:38.680 END TEST raid0_resize_superblock_test
************************************
00:06:38.680 11:45:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.680 11:45:04 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:06:38.680 11:45:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:38.680 11:45:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:38.680 11:45:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:38.680 ************************************
00:06:38.680 START TEST raid1_resize_superblock_test
00:06:38.680 ************************************
00:06:38.680 Process raid pid: 60195
00:06:38.680 11:45:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1
00:06:38.680 11:45:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:06:38.680 11:45:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60195
00:06:38.680 11:45:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:38.680 11:45:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60195'
00:06:38.680 11:45:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60195
00:06:38.680 11:45:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60195 ']'
00:06:38.680 11:45:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:38.680 11:45:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:38.680 11:45:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:38.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:38.680 11:45:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:38.680 11:45:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:38.680 [2024-11-27 11:45:04.932248] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization...
00:06:38.680 [2024-11-27 11:45:04.932455] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:38.940 [2024-11-27 11:45:05.107931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:38.940 [2024-11-27 11:45:05.222603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:39.199 [2024-11-27 11:45:05.428398] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:39.199 [2024-11-27 11:45:05.428494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:39.458 11:45:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:39.458 11:45:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:06:39.458 11:45:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:06:39.458 11:45:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:39.458 11:45:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:40.028 malloc0
00:06:40.028 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:40.028 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:40.028 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:40.028 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:40.028 [2024-11-27 11:45:06.305562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-11-27 11:45:06.305690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-27 11:45:06.305735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
[2024-11-27 11:45:06.305776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-27 11:45:06.307924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-27 11:45:06.308003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
00:06:40.028 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:40.028 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:06:40.028 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:40.028 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:40.028 c9da373d-9fe8-4092-80ac-3cb1f826344c
00:06:40.028 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:40.288
773f240f-76a1-4c3d-b052-4caa62d8f5c8 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.288 463a3084-be5b-4363-9518-672301103daa 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.288 [2024-11-27 11:45:06.441010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 773f240f-76a1-4c3d-b052-4caa62d8f5c8 is claimed 00:06:40.288 [2024-11-27 11:45:06.441102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 463a3084-be5b-4363-9518-672301103daa is claimed 00:06:40.288 [2024-11-27 11:45:06.441238] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:40.288 [2024-11-27 11:45:06.441253] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:40.288 [2024-11-27 11:45:06.441513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:40.288 [2024-11-27 11:45:06.441704] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:40.288 [2024-11-27 
11:45:06.441715] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:40.288 [2024-11-27 11:45:06.441896] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:40.288 [2024-11-27 11:45:06.553068] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.288 [2024-11-27 11:45:06.600958] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:40.288 [2024-11-27 11:45:06.600986] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '773f240f-76a1-4c3d-b052-4caa62d8f5c8' was resized: old size 131072, new size 204800 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.288 11:45:06 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.288 [2024-11-27 11:45:06.612823] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:40.288 [2024-11-27 11:45:06.612860] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '463a3084-be5b-4363-9518-672301103daa' was resized: old size 131072, new size 204800 00:06:40.288 [2024-11-27 11:45:06.612887] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.288 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:40.549 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:40.549 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:40.549 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.549 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.549 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.549 11:45:06 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:40.549 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:40.549 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:40.549 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:40.549 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:40.549 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.549 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.549 [2024-11-27 11:45:06.724759] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:40.549 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.549 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:40.549 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:40.549 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:40.549 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:40.549 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.549 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.549 [2024-11-27 11:45:06.772471] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:40.549 [2024-11-27 11:45:06.772605] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:40.549 [2024-11-27 11:45:06.772670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:40.549 
[2024-11-27 11:45:06.772879] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:40.549 [2024-11-27 11:45:06.773140] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:40.549 [2024-11-27 11:45:06.773275] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:40.549 [2024-11-27 11:45:06.773335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:40.549 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.549 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:40.549 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.549 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.549 [2024-11-27 11:45:06.784353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:40.549 [2024-11-27 11:45:06.784447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:40.549 [2024-11-27 11:45:06.784497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:40.549 [2024-11-27 11:45:06.784539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:40.549 [2024-11-27 11:45:06.786798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:40.549 [2024-11-27 11:45:06.786889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:40.549 [2024-11-27 11:45:06.788608] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 773f240f-76a1-4c3d-b052-4caa62d8f5c8 00:06:40.549 [2024-11-27 11:45:06.788757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
773f240f-76a1-4c3d-b052-4caa62d8f5c8 is claimed 00:06:40.549 [2024-11-27 11:45:06.788947] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 463a3084-be5b-4363-9518-672301103daa 00:06:40.549 [2024-11-27 11:45:06.789024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 463a3084-be5b-4363-9518-672301103daa is claimed 00:06:40.549 [2024-11-27 11:45:06.789254] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 463a3084-be5b-4363-9518-672301103daa (2) smaller than existing raid bdev Raid (3) 00:06:40.549 [2024-11-27 11:45:06.789330] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 773f240f-76a1-4c3d-b052-4caa62d8f5c8: File exists 00:06:40.549 [2024-11-27 11:45:06.789441] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:40.550 [2024-11-27 11:45:06.789482] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:40.550 pt0 00:06:40.550 [2024-11-27 11:45:06.789762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:40.550 [2024-11-27 11:45:06.789939] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:40.550 [2024-11-27 11:45:06.789995] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:40.550 [2024-11-27 11:45:06.790197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:40.550 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.550 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:40.550 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.550 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.550 11:45:06 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.550 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:40.550 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:40.550 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:40.550 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:40.550 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.550 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.550 [2024-11-27 11:45:06.812882] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:40.550 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.550 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:40.550 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:40.550 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:40.550 11:45:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60195 00:06:40.550 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60195 ']' 00:06:40.550 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60195 00:06:40.550 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:40.550 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.550 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 60195 00:06:40.550 killing process with pid 60195 00:06:40.550 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.550 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.550 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60195' 00:06:40.550 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60195 00:06:40.550 [2024-11-27 11:45:06.898451] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:40.550 [2024-11-27 11:45:06.898559] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:40.550 11:45:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60195 00:06:40.550 [2024-11-27 11:45:06.898625] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:40.550 [2024-11-27 11:45:06.898636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:42.458 [2024-11-27 11:45:08.331174] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:43.392 ************************************ 00:06:43.392 END TEST raid1_resize_superblock_test 00:06:43.392 ************************************ 00:06:43.392 11:45:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:43.392 00:06:43.392 real 0m4.618s 00:06:43.392 user 0m4.867s 00:06:43.392 sys 0m0.541s 00:06:43.392 11:45:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.392 11:45:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.392 11:45:09 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:43.392 11:45:09 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' 
Linux = Linux ']' 00:06:43.392 11:45:09 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:43.392 11:45:09 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:43.392 11:45:09 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:43.392 11:45:09 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:43.392 11:45:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:43.392 11:45:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.392 11:45:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:43.392 ************************************ 00:06:43.392 START TEST raid_function_test_raid0 00:06:43.392 ************************************ 00:06:43.392 11:45:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:06:43.392 11:45:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:43.392 11:45:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:43.392 11:45:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:43.392 11:45:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60298 00:06:43.392 11:45:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:43.392 11:45:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60298' 00:06:43.392 Process raid pid: 60298 00:06:43.392 11:45:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60298 00:06:43.392 11:45:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60298 ']' 00:06:43.392 11:45:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.392 11:45:09 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.392 11:45:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.392 11:45:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.392 11:45:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:43.392 [2024-11-27 11:45:09.648719] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:06:43.392 [2024-11-27 11:45:09.649569] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:43.650 [2024-11-27 11:45:09.825420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.650 [2024-11-27 11:45:09.942460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.910 [2024-11-27 11:45:10.149571] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:43.910 [2024-11-27 11:45:10.149663] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:44.170 11:45:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.170 11:45:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:06:44.170 11:45:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:44.170 11:45:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.170 11:45:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set 
+x 00:06:44.170 Base_1 00:06:44.170 11:45:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.170 11:45:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:44.431 11:45:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.431 11:45:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:44.431 Base_2 00:06:44.431 11:45:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.431 11:45:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:44.431 11:45:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.431 11:45:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:44.431 [2024-11-27 11:45:10.607960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:44.431 [2024-11-27 11:45:10.609960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:44.431 [2024-11-27 11:45:10.610030] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:44.431 [2024-11-27 11:45:10.610041] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:44.431 [2024-11-27 11:45:10.610305] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:44.431 [2024-11-27 11:45:10.610452] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:44.431 [2024-11-27 11:45:10.610461] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:44.431 [2024-11-27 11:45:10.610607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:44.431 
11:45:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.431 11:45:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:44.431 11:45:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.431 11:45:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:44.431 11:45:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:44.431 11:45:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.431 11:45:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:44.431 11:45:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:44.431 11:45:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:44.431 11:45:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:44.431 11:45:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:44.431 11:45:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:44.431 11:45:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:44.431 11:45:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:44.431 11:45:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:44.431 11:45:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:44.431 11:45:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:44.431 11:45:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 
raid /dev/nbd0 00:06:44.690 [2024-11-27 11:45:10.831786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:44.690 /dev/nbd0 00:06:44.690 11:45:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:44.690 11:45:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:44.690 11:45:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:44.690 11:45:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:06:44.690 11:45:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:44.690 11:45:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:44.690 11:45:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:44.690 11:45:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:06:44.690 11:45:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:44.690 11:45:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:44.690 11:45:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:44.690 1+0 records in 00:06:44.690 1+0 records out 00:06:44.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003912 s, 10.5 MB/s 00:06:44.690 11:45:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.690 11:45:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:06:44.690 11:45:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.690 11:45:10 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:44.690 11:45:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:06:44.690 11:45:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.690 11:45:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:44.690 11:45:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:44.690 11:45:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:44.690 11:45:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:44.949 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:44.949 { 00:06:44.949 "nbd_device": "/dev/nbd0", 00:06:44.949 "bdev_name": "raid" 00:06:44.949 } 00:06:44.949 ]' 00:06:44.949 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:44.949 { 00:06:44.949 "nbd_device": "/dev/nbd0", 00:06:44.949 "bdev_name": "raid" 00:06:44.949 } 00:06:44.949 ]' 00:06:44.949 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.949 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:44.949 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:44.949 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.949 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:44.949 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:44.949 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:44.949 11:45:11 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:44.949 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:44.949 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:44.949 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:44.949 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:44.949 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:44.949 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:44.949 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:44.949 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:44.949 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:44.949 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:44.949 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:44.949 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:44.949 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:44.949 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:44.949 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:44.950 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:44.950 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:44.950 4096+0 records in 00:06:44.950 4096+0 
records out 00:06:44.950 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0359084 s, 58.4 MB/s 00:06:44.950 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:45.210 4096+0 records in 00:06:45.210 4096+0 records out 00:06:45.210 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.183622 s, 11.4 MB/s 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:45.210 128+0 records in 00:06:45.210 128+0 records out 00:06:45.210 65536 bytes (66 kB, 64 KiB) copied, 0.000973247 s, 67.3 MB/s 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # 
unmap_off=526336 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:45.210 2035+0 records in 00:06:45.210 2035+0 records out 00:06:45.210 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0127378 s, 81.8 MB/s 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:45.210 456+0 records in 00:06:45.210 456+0 records out 00:06:45.210 233472 bytes (233 kB, 228 KiB) copied, 0.00345551 s, 67.6 MB/s 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 
00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.210 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:45.496 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:45.496 [2024-11-27 11:45:11.748549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:45.496 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:45.496 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:45.496 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.496 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.496 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:45.496 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:45.496 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.496 
11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:45.496 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:45.496 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:45.756 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:45.756 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:45.756 11:45:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.756 11:45:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:45.756 11:45:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:45.756 11:45:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.756 11:45:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:45.756 11:45:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:45.756 11:45:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:45.756 11:45:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:45.756 11:45:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:45.756 11:45:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60298 00:06:45.756 11:45:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60298 ']' 00:06:45.756 11:45:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60298 00:06:45.756 11:45:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:06:45.756 11:45:12 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.756 11:45:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60298 00:06:45.756 killing process with pid 60298 00:06:45.756 11:45:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.756 11:45:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.756 11:45:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60298' 00:06:45.756 11:45:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60298 00:06:45.756 [2024-11-27 11:45:12.061036] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:45.756 [2024-11-27 11:45:12.061142] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:45.756 [2024-11-27 11:45:12.061189] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:45.756 [2024-11-27 11:45:12.061204] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:45.756 11:45:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60298 00:06:46.016 [2024-11-27 11:45:12.268902] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:47.397 ************************************ 00:06:47.397 END TEST raid_function_test_raid0 00:06:47.397 ************************************ 00:06:47.397 11:45:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:47.397 00:06:47.397 real 0m3.823s 00:06:47.397 user 0m4.429s 00:06:47.397 sys 0m0.961s 00:06:47.397 11:45:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.397 11:45:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:47.397 
11:45:13 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:47.397 11:45:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:47.397 11:45:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.397 11:45:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:47.397 ************************************ 00:06:47.397 START TEST raid_function_test_concat 00:06:47.397 ************************************ 00:06:47.397 11:45:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:06:47.397 11:45:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:47.397 11:45:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:47.397 11:45:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:47.397 Process raid pid: 60421 00:06:47.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:47.397 11:45:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60421 00:06:47.397 11:45:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:47.397 11:45:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60421' 00:06:47.398 11:45:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60421 00:06:47.398 11:45:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60421 ']' 00:06:47.398 11:45:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.398 11:45:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.398 11:45:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.398 11:45:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.398 11:45:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:47.398 [2024-11-27 11:45:13.534109] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:06:47.398 [2024-11-27 11:45:13.534669] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:47.398 [2024-11-27 11:45:13.710511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.657 [2024-11-27 11:45:13.825512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.657 [2024-11-27 11:45:14.023863] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:47.657 [2024-11-27 11:45:14.023983] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:48.228 Base_1 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:48.228 Base_2 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:48.228 [2024-11-27 11:45:14.457426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:48.228 [2024-11-27 11:45:14.459190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:48.228 [2024-11-27 11:45:14.459271] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:48.228 [2024-11-27 11:45:14.459286] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:48.228 [2024-11-27 11:45:14.459522] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:48.228 [2024-11-27 11:45:14.459678] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:48.228 [2024-11-27 11:45:14.459687] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:48.228 [2024-11-27 11:45:14.459828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.228 11:45:14 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:48.228 11:45:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:48.489 [2024-11-27 11:45:14.681131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:48.489 /dev/nbd0 00:06:48.489 11:45:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:48.489 11:45:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:48.489 11:45:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:48.489 11:45:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:06:48.489 11:45:14 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:48.489 11:45:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:48.489 11:45:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:48.489 11:45:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:06:48.489 11:45:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:48.489 11:45:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:48.489 11:45:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.489 1+0 records in 00:06:48.489 1+0 records out 00:06:48.489 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503505 s, 8.1 MB/s 00:06:48.489 11:45:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.489 11:45:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:06:48.489 11:45:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.489 11:45:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:48.489 11:45:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:06:48.489 11:45:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:48.489 11:45:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:48.489 11:45:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:48.489 11:45:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 
00:06:48.489 11:45:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:48.749 11:45:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:48.749 { 00:06:48.749 "nbd_device": "/dev/nbd0", 00:06:48.749 "bdev_name": "raid" 00:06:48.749 } 00:06:48.749 ]' 00:06:48.749 11:45:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:48.749 { 00:06:48.749 "nbd_device": "/dev/nbd0", 00:06:48.749 "bdev_name": "raid" 00:06:48.749 } 00:06:48.749 ]' 00:06:48.749 11:45:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:48.749 11:45:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:48.749 11:45:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:48.749 11:45:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:48.749 11:45:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:48.749 11:45:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:48.749 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:48.749 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:48.749 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:48.749 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:48.749 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:48.749 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:48.749 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:48.749 11:45:15 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:48.749 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:48.749 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:48.749 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:48.749 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:48.749 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:48.749 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:48.749 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:48.749 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:48.749 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:48.749 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:48.749 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:48.749 4096+0 records in 00:06:48.749 4096+0 records out 00:06:48.749 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0279749 s, 75.0 MB/s 00:06:48.749 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:49.010 4096+0 records in 00:06:49.010 4096+0 records out 00:06:49.010 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.191817 s, 10.9 MB/s 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest 
/dev/nbd0 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:49.010 128+0 records in 00:06:49.010 128+0 records out 00:06:49.010 65536 bytes (66 kB, 64 KiB) copied, 0.00126358 s, 51.9 MB/s 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:49.010 2035+0 records in 00:06:49.010 2035+0 records out 00:06:49.010 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0121735 s, 85.6 MB/s 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:49.010 456+0 records in 00:06:49.010 456+0 records out 00:06:49.010 233472 bytes (233 kB, 228 KiB) copied, 0.00384147 s, 60.8 MB/s 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:49.010 11:45:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:49.010 11:45:15 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:49.011 11:45:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:49.011 11:45:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.011 11:45:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:49.270 11:45:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:49.270 [2024-11-27 11:45:15.570329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:49.270 11:45:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:49.270 11:45:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:49.270 11:45:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.270 11:45:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.270 11:45:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:49.270 11:45:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:49.270 11:45:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.270 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:49.270 11:45:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:49.270 11:45:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:49.530 11:45:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:49.530 11:45:15 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:49.530 11:45:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:49.530 11:45:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:49.530 11:45:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:49.530 11:45:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:49.530 11:45:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:49.530 11:45:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:49.530 11:45:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:49.530 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:06:49.530 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:49.530 11:45:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60421 00:06:49.530 11:45:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60421 ']' 00:06:49.530 11:45:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60421 00:06:49.530 11:45:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:06:49.530 11:45:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.530 11:45:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60421 00:06:49.530 11:45:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.530 11:45:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.530 11:45:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60421' 00:06:49.530 
killing process with pid 60421 00:06:49.530 11:45:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60421 00:06:49.530 [2024-11-27 11:45:15.889634] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:49.530 [2024-11-27 11:45:15.889788] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:49.530 [2024-11-27 11:45:15.889885] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:49.530 [2024-11-27 11:45:15.889940] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:49.530 11:45:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60421 00:06:49.789 [2024-11-27 11:45:16.095140] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:51.168 11:45:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:06:51.168 00:06:51.168 real 0m3.758s 00:06:51.169 user 0m4.308s 00:06:51.169 sys 0m0.942s 00:06:51.169 11:45:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.169 11:45:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:51.169 ************************************ 00:06:51.169 END TEST raid_function_test_concat 00:06:51.169 ************************************ 00:06:51.169 11:45:17 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:06:51.169 11:45:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:51.169 11:45:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.169 11:45:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:51.169 ************************************ 00:06:51.169 START TEST raid0_resize_test 00:06:51.169 ************************************ 00:06:51.169 11:45:17 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@1129 -- # raid_resize_test 0 00:06:51.169 11:45:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:06:51.169 11:45:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:51.169 11:45:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:51.169 11:45:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:51.169 11:45:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:51.169 11:45:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:51.169 11:45:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:51.169 11:45:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:51.169 Process raid pid: 60544 00:06:51.169 11:45:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60544 00:06:51.169 11:45:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:51.169 11:45:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60544' 00:06:51.169 11:45:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60544 00:06:51.169 11:45:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60544 ']' 00:06:51.169 11:45:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.169 11:45:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.169 11:45:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:51.169 11:45:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.169 11:45:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.169 [2024-11-27 11:45:17.368820] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:06:51.169 [2024-11-27 11:45:17.369023] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.169 [2024-11-27 11:45:17.542146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.428 [2024-11-27 11:45:17.659061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.687 [2024-11-27 11:45:17.863787] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:51.687 [2024-11-27 11:45:17.863818] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.947 Base_1 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:51.947 Base_2 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.947 [2024-11-27 11:45:18.212324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:51.947 [2024-11-27 11:45:18.214270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:51.947 [2024-11-27 11:45:18.214321] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:51.947 [2024-11-27 11:45:18.214332] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:51.947 [2024-11-27 11:45:18.214558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:51.947 [2024-11-27 11:45:18.214662] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:51.947 [2024-11-27 11:45:18.214669] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:51.947 [2024-11-27 11:45:18.214787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 
00:06:51.947 [2024-11-27 11:45:18.220287] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:51.947 [2024-11-27 11:45:18.220313] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:51.947 true 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:51.947 [2024-11-27 11:45:18.232445] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.947 [2024-11-27 11:45:18.280185] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:51.947 [2024-11-27 11:45:18.280209] 
bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:51.947 [2024-11-27 11:45:18.280238] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:51.947 true 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.947 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.947 [2024-11-27 11:45:18.296343] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:51.948 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.209 11:45:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:06:52.209 11:45:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:06:52.209 11:45:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:06:52.209 11:45:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:06:52.209 11:45:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:06:52.209 11:45:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60544 00:06:52.209 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60544 ']' 00:06:52.209 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60544 00:06:52.209 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:52.209 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- 
# '[' Linux = Linux ']' 00:06:52.209 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60544 00:06:52.209 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.209 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.209 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60544' 00:06:52.209 killing process with pid 60544 00:06:52.209 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60544 00:06:52.209 [2024-11-27 11:45:18.380491] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:52.209 [2024-11-27 11:45:18.380639] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:52.209 11:45:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60544 00:06:52.209 [2024-11-27 11:45:18.380730] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:52.209 [2024-11-27 11:45:18.380742] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:52.209 [2024-11-27 11:45:18.397452] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:53.155 11:45:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:53.155 ************************************ 00:06:53.155 END TEST raid0_resize_test 00:06:53.155 ************************************ 00:06:53.155 00:06:53.155 real 0m2.212s 00:06:53.155 user 0m2.353s 00:06:53.155 sys 0m0.325s 00:06:53.155 11:45:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.155 11:45:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.415 11:45:19 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:06:53.415 
11:45:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:53.415 11:45:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.415 11:45:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:53.415 ************************************ 00:06:53.415 START TEST raid1_resize_test 00:06:53.415 ************************************ 00:06:53.415 11:45:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:06:53.415 11:45:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:06:53.415 11:45:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:53.415 11:45:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:53.415 11:45:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:53.415 11:45:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:53.415 11:45:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:53.415 11:45:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:53.415 11:45:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:53.415 11:45:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60600 00:06:53.415 11:45:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:53.415 11:45:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60600' 00:06:53.415 Process raid pid: 60600 00:06:53.415 11:45:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60600 00:06:53.415 11:45:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60600 ']' 00:06:53.415 11:45:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:53.415 11:45:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.415 11:45:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.415 11:45:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.415 11:45:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.415 [2024-11-27 11:45:19.649477] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:06:53.415 [2024-11-27 11:45:19.650067] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.675 [2024-11-27 11:45:19.821424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.675 [2024-11-27 11:45:19.932172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.934 [2024-11-27 11:45:20.131460] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:53.934 [2024-11-27 11:45:20.131595] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.194 
Base_1 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.194 Base_2 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.194 [2024-11-27 11:45:20.504180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:54.194 [2024-11-27 11:45:20.505927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:54.194 [2024-11-27 11:45:20.505987] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:54.194 [2024-11-27 11:45:20.505998] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:54.194 [2024-11-27 11:45:20.506229] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:54.194 [2024-11-27 11:45:20.506349] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:54.194 [2024-11-27 11:45:20.506357] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:54.194 [2024-11-27 11:45:20.506505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.194 [2024-11-27 11:45:20.516145] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:54.194 [2024-11-27 11:45:20.516221] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:54.194 true 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.194 [2024-11-27 11:45:20.528318] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.194 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.194 [2024-11-27 11:45:20.576042] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:54.194 [2024-11-27 11:45:20.576077] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:54.194 [2024-11-27 11:45:20.576109] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:06:54.453 true 00:06:54.453 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.453 11:45:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:54.453 11:45:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:54.453 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.453 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.453 [2024-11-27 11:45:20.588194] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:54.453 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.453 11:45:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:06:54.453 11:45:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:06:54.453 11:45:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:06:54.453 11:45:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:06:54.453 11:45:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:06:54.453 11:45:20 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@387 -- # killprocess 60600 00:06:54.453 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60600 ']' 00:06:54.453 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60600 00:06:54.453 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:54.453 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.453 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60600 00:06:54.453 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.453 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.453 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60600' 00:06:54.453 killing process with pid 60600 00:06:54.453 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60600 00:06:54.453 [2024-11-27 11:45:20.669916] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:54.453 [2024-11-27 11:45:20.670047] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:54.453 11:45:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60600 00:06:54.453 [2024-11-27 11:45:20.670582] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:54.453 [2024-11-27 11:45:20.670659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:54.453 [2024-11-27 11:45:20.687899] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:55.832 11:45:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:55.832 00:06:55.832 real 0m2.239s 00:06:55.832 user 0m2.359s 00:06:55.832 sys 0m0.339s 00:06:55.832 11:45:21 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.832 11:45:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.832 ************************************ 00:06:55.832 END TEST raid1_resize_test 00:06:55.832 ************************************ 00:06:55.832 11:45:21 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:06:55.832 11:45:21 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:55.832 11:45:21 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:55.832 11:45:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:55.832 11:45:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.832 11:45:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:55.832 ************************************ 00:06:55.832 START TEST raid_state_function_test 00:06:55.832 ************************************ 00:06:55.832 11:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:06:55.832 11:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:55.832 11:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:55.832 11:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:55.832 11:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:55.832 11:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:55.832 11:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:55.832 11:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:55.832 11:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:06:55.832 11:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:55.832 11:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:55.832 11:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:55.832 11:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:55.832 11:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:55.832 11:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:55.832 11:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:55.833 11:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:55.833 11:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:55.833 11:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:55.833 11:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:55.833 11:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:55.833 11:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:55.833 11:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:55.833 11:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:55.833 11:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60666 00:06:55.833 11:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:55.833 11:45:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60666' 00:06:55.833 Process raid pid: 60666 00:06:55.833 11:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60666 00:06:55.833 11:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60666 ']' 00:06:55.833 11:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.833 11:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.833 11:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.833 11:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.833 11:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.833 [2024-11-27 11:45:21.973498] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:06:55.833 [2024-11-27 11:45:21.973708] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.833 [2024-11-27 11:45:22.150513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.092 [2024-11-27 11:45:22.263673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.092 [2024-11-27 11:45:22.473721] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.092 [2024-11-27 11:45:22.473757] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.661 11:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.661 11:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:06:56.661 11:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:56.661 11:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.661 11:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.661 [2024-11-27 11:45:22.805795] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:56.661 [2024-11-27 11:45:22.805855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:56.661 [2024-11-27 11:45:22.805866] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:56.661 [2024-11-27 11:45:22.805876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:56.661 11:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.661 11:45:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:56.661 11:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:56.661 11:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:56.661 11:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:56.661 11:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:56.661 11:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:56.661 11:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:56.661 11:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:56.661 11:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:56.661 11:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:56.661 11:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.661 11:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:56.661 11:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.661 11:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.661 11:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.661 11:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:56.661 "name": "Existed_Raid", 00:06:56.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:56.661 "strip_size_kb": 64, 00:06:56.661 "state": "configuring", 00:06:56.661 
"raid_level": "raid0", 00:06:56.661 "superblock": false, 00:06:56.661 "num_base_bdevs": 2, 00:06:56.661 "num_base_bdevs_discovered": 0, 00:06:56.661 "num_base_bdevs_operational": 2, 00:06:56.661 "base_bdevs_list": [ 00:06:56.661 { 00:06:56.661 "name": "BaseBdev1", 00:06:56.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:56.661 "is_configured": false, 00:06:56.661 "data_offset": 0, 00:06:56.661 "data_size": 0 00:06:56.661 }, 00:06:56.661 { 00:06:56.661 "name": "BaseBdev2", 00:06:56.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:56.661 "is_configured": false, 00:06:56.661 "data_offset": 0, 00:06:56.661 "data_size": 0 00:06:56.661 } 00:06:56.661 ] 00:06:56.661 }' 00:06:56.661 11:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:56.661 11:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.920 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:56.920 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.920 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.920 [2024-11-27 11:45:23.248994] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:56.920 [2024-11-27 11:45:23.249086] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:56.920 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.920 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:56.920 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.920 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:06:56.920 [2024-11-27 11:45:23.260984] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:56.920 [2024-11-27 11:45:23.261066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:56.920 [2024-11-27 11:45:23.261094] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:56.920 [2024-11-27 11:45:23.261118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:56.920 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.920 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:56.920 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.920 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.191 [2024-11-27 11:45:23.310664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:57.191 BaseBdev1 00:06:57.191 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.191 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:57.191 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:57.191 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:57.191 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:57.191 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:57.191 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:57.191 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:06:57.191 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.191 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.191 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.191 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:57.191 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.191 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.191 [ 00:06:57.191 { 00:06:57.191 "name": "BaseBdev1", 00:06:57.191 "aliases": [ 00:06:57.191 "1a634937-4992-4db1-b1b6-1752645ccd20" 00:06:57.191 ], 00:06:57.191 "product_name": "Malloc disk", 00:06:57.191 "block_size": 512, 00:06:57.191 "num_blocks": 65536, 00:06:57.191 "uuid": "1a634937-4992-4db1-b1b6-1752645ccd20", 00:06:57.191 "assigned_rate_limits": { 00:06:57.191 "rw_ios_per_sec": 0, 00:06:57.191 "rw_mbytes_per_sec": 0, 00:06:57.191 "r_mbytes_per_sec": 0, 00:06:57.191 "w_mbytes_per_sec": 0 00:06:57.192 }, 00:06:57.192 "claimed": true, 00:06:57.192 "claim_type": "exclusive_write", 00:06:57.192 "zoned": false, 00:06:57.192 "supported_io_types": { 00:06:57.192 "read": true, 00:06:57.192 "write": true, 00:06:57.192 "unmap": true, 00:06:57.192 "flush": true, 00:06:57.192 "reset": true, 00:06:57.192 "nvme_admin": false, 00:06:57.192 "nvme_io": false, 00:06:57.192 "nvme_io_md": false, 00:06:57.192 "write_zeroes": true, 00:06:57.192 "zcopy": true, 00:06:57.192 "get_zone_info": false, 00:06:57.192 "zone_management": false, 00:06:57.192 "zone_append": false, 00:06:57.192 "compare": false, 00:06:57.192 "compare_and_write": false, 00:06:57.192 "abort": true, 00:06:57.192 "seek_hole": false, 00:06:57.192 "seek_data": false, 00:06:57.192 "copy": true, 00:06:57.192 "nvme_iov_md": 
false 00:06:57.192 }, 00:06:57.192 "memory_domains": [ 00:06:57.192 { 00:06:57.192 "dma_device_id": "system", 00:06:57.192 "dma_device_type": 1 00:06:57.192 }, 00:06:57.192 { 00:06:57.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.192 "dma_device_type": 2 00:06:57.192 } 00:06:57.192 ], 00:06:57.192 "driver_specific": {} 00:06:57.192 } 00:06:57.192 ] 00:06:57.192 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.192 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:57.192 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:57.192 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:57.192 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:57.192 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:57.192 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:57.192 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:57.192 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:57.192 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:57.192 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:57.192 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:57.192 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.192 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.192 11:45:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.192 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:57.192 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.192 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:57.192 "name": "Existed_Raid", 00:06:57.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.192 "strip_size_kb": 64, 00:06:57.192 "state": "configuring", 00:06:57.192 "raid_level": "raid0", 00:06:57.192 "superblock": false, 00:06:57.192 "num_base_bdevs": 2, 00:06:57.192 "num_base_bdevs_discovered": 1, 00:06:57.192 "num_base_bdevs_operational": 2, 00:06:57.192 "base_bdevs_list": [ 00:06:57.192 { 00:06:57.192 "name": "BaseBdev1", 00:06:57.192 "uuid": "1a634937-4992-4db1-b1b6-1752645ccd20", 00:06:57.192 "is_configured": true, 00:06:57.192 "data_offset": 0, 00:06:57.192 "data_size": 65536 00:06:57.192 }, 00:06:57.192 { 00:06:57.192 "name": "BaseBdev2", 00:06:57.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.192 "is_configured": false, 00:06:57.192 "data_offset": 0, 00:06:57.192 "data_size": 0 00:06:57.192 } 00:06:57.192 ] 00:06:57.192 }' 00:06:57.192 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:57.192 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.452 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:57.452 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.452 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.452 [2024-11-27 11:45:23.749974] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:57.452 [2024-11-27 11:45:23.750087] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:57.452 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.452 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:57.452 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.452 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.452 [2024-11-27 11:45:23.758001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:57.452 [2024-11-27 11:45:23.759984] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:57.452 [2024-11-27 11:45:23.760075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:57.452 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.452 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:57.452 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:57.452 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:57.452 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:57.452 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:57.452 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:57.452 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:57.452 11:45:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:57.452 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:57.452 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:57.452 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:57.452 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:57.452 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.452 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:57.452 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.452 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.452 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.452 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:57.452 "name": "Existed_Raid", 00:06:57.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.452 "strip_size_kb": 64, 00:06:57.452 "state": "configuring", 00:06:57.452 "raid_level": "raid0", 00:06:57.452 "superblock": false, 00:06:57.452 "num_base_bdevs": 2, 00:06:57.452 "num_base_bdevs_discovered": 1, 00:06:57.452 "num_base_bdevs_operational": 2, 00:06:57.452 "base_bdevs_list": [ 00:06:57.452 { 00:06:57.452 "name": "BaseBdev1", 00:06:57.452 "uuid": "1a634937-4992-4db1-b1b6-1752645ccd20", 00:06:57.452 "is_configured": true, 00:06:57.452 "data_offset": 0, 00:06:57.452 "data_size": 65536 00:06:57.452 }, 00:06:57.452 { 00:06:57.452 "name": "BaseBdev2", 00:06:57.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.452 "is_configured": false, 00:06:57.452 "data_offset": 0, 00:06:57.452 "data_size": 0 
00:06:57.452 } 00:06:57.452 ] 00:06:57.452 }' 00:06:57.452 11:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:57.452 11:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.022 [2024-11-27 11:45:24.227357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:58.022 [2024-11-27 11:45:24.227476] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:58.022 [2024-11-27 11:45:24.227502] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:58.022 [2024-11-27 11:45:24.227841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:58.022 [2024-11-27 11:45:24.228129] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:58.022 [2024-11-27 11:45:24.228185] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:58.022 [2024-11-27 11:45:24.228523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:58.022 BaseBdev2 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:58.022 11:45:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.022 [ 00:06:58.022 { 00:06:58.022 "name": "BaseBdev2", 00:06:58.022 "aliases": [ 00:06:58.022 "f9614e00-1dab-4b3a-8af0-c4be26450fc4" 00:06:58.022 ], 00:06:58.022 "product_name": "Malloc disk", 00:06:58.022 "block_size": 512, 00:06:58.022 "num_blocks": 65536, 00:06:58.022 "uuid": "f9614e00-1dab-4b3a-8af0-c4be26450fc4", 00:06:58.022 "assigned_rate_limits": { 00:06:58.022 "rw_ios_per_sec": 0, 00:06:58.022 "rw_mbytes_per_sec": 0, 00:06:58.022 "r_mbytes_per_sec": 0, 00:06:58.022 "w_mbytes_per_sec": 0 00:06:58.022 }, 00:06:58.022 "claimed": true, 00:06:58.022 "claim_type": "exclusive_write", 00:06:58.022 "zoned": false, 00:06:58.022 "supported_io_types": { 00:06:58.022 "read": true, 00:06:58.022 "write": true, 00:06:58.022 "unmap": true, 00:06:58.022 "flush": true, 00:06:58.022 "reset": true, 00:06:58.022 "nvme_admin": false, 00:06:58.022 "nvme_io": false, 00:06:58.022 "nvme_io_md": 
false, 00:06:58.022 "write_zeroes": true, 00:06:58.022 "zcopy": true, 00:06:58.022 "get_zone_info": false, 00:06:58.022 "zone_management": false, 00:06:58.022 "zone_append": false, 00:06:58.022 "compare": false, 00:06:58.022 "compare_and_write": false, 00:06:58.022 "abort": true, 00:06:58.022 "seek_hole": false, 00:06:58.022 "seek_data": false, 00:06:58.022 "copy": true, 00:06:58.022 "nvme_iov_md": false 00:06:58.022 }, 00:06:58.022 "memory_domains": [ 00:06:58.022 { 00:06:58.022 "dma_device_id": "system", 00:06:58.022 "dma_device_type": 1 00:06:58.022 }, 00:06:58.022 { 00:06:58.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.022 "dma_device_type": 2 00:06:58.022 } 00:06:58.022 ], 00:06:58.022 "driver_specific": {} 00:06:58.022 } 00:06:58.022 ] 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.022 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:58.022 "name": "Existed_Raid", 00:06:58.022 "uuid": "ca604fbf-b734-4da6-8baa-6c0e858811e8", 00:06:58.022 "strip_size_kb": 64, 00:06:58.022 "state": "online", 00:06:58.022 "raid_level": "raid0", 00:06:58.022 "superblock": false, 00:06:58.022 "num_base_bdevs": 2, 00:06:58.022 "num_base_bdevs_discovered": 2, 00:06:58.022 "num_base_bdevs_operational": 2, 00:06:58.023 "base_bdevs_list": [ 00:06:58.023 { 00:06:58.023 "name": "BaseBdev1", 00:06:58.023 "uuid": "1a634937-4992-4db1-b1b6-1752645ccd20", 00:06:58.023 "is_configured": true, 00:06:58.023 "data_offset": 0, 00:06:58.023 "data_size": 65536 00:06:58.023 }, 00:06:58.023 { 00:06:58.023 "name": "BaseBdev2", 00:06:58.023 "uuid": "f9614e00-1dab-4b3a-8af0-c4be26450fc4", 00:06:58.023 "is_configured": true, 00:06:58.023 "data_offset": 0, 00:06:58.023 "data_size": 65536 00:06:58.023 } 00:06:58.023 ] 00:06:58.023 }' 00:06:58.023 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:06:58.023 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.591 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:58.591 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:58.591 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:58.591 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:58.591 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:58.591 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:58.591 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:58.591 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.591 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.591 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:58.591 [2024-11-27 11:45:24.722855] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:58.591 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.591 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:58.591 "name": "Existed_Raid", 00:06:58.591 "aliases": [ 00:06:58.591 "ca604fbf-b734-4da6-8baa-6c0e858811e8" 00:06:58.591 ], 00:06:58.591 "product_name": "Raid Volume", 00:06:58.591 "block_size": 512, 00:06:58.591 "num_blocks": 131072, 00:06:58.591 "uuid": "ca604fbf-b734-4da6-8baa-6c0e858811e8", 00:06:58.591 "assigned_rate_limits": { 00:06:58.591 "rw_ios_per_sec": 0, 00:06:58.591 "rw_mbytes_per_sec": 0, 00:06:58.591 "r_mbytes_per_sec": 
0, 00:06:58.591 "w_mbytes_per_sec": 0 00:06:58.591 }, 00:06:58.591 "claimed": false, 00:06:58.591 "zoned": false, 00:06:58.591 "supported_io_types": { 00:06:58.591 "read": true, 00:06:58.591 "write": true, 00:06:58.591 "unmap": true, 00:06:58.591 "flush": true, 00:06:58.591 "reset": true, 00:06:58.591 "nvme_admin": false, 00:06:58.591 "nvme_io": false, 00:06:58.591 "nvme_io_md": false, 00:06:58.591 "write_zeroes": true, 00:06:58.591 "zcopy": false, 00:06:58.591 "get_zone_info": false, 00:06:58.591 "zone_management": false, 00:06:58.591 "zone_append": false, 00:06:58.591 "compare": false, 00:06:58.591 "compare_and_write": false, 00:06:58.591 "abort": false, 00:06:58.591 "seek_hole": false, 00:06:58.591 "seek_data": false, 00:06:58.591 "copy": false, 00:06:58.591 "nvme_iov_md": false 00:06:58.591 }, 00:06:58.591 "memory_domains": [ 00:06:58.591 { 00:06:58.591 "dma_device_id": "system", 00:06:58.591 "dma_device_type": 1 00:06:58.591 }, 00:06:58.591 { 00:06:58.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.591 "dma_device_type": 2 00:06:58.591 }, 00:06:58.591 { 00:06:58.591 "dma_device_id": "system", 00:06:58.591 "dma_device_type": 1 00:06:58.591 }, 00:06:58.591 { 00:06:58.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.591 "dma_device_type": 2 00:06:58.591 } 00:06:58.591 ], 00:06:58.591 "driver_specific": { 00:06:58.591 "raid": { 00:06:58.591 "uuid": "ca604fbf-b734-4da6-8baa-6c0e858811e8", 00:06:58.591 "strip_size_kb": 64, 00:06:58.591 "state": "online", 00:06:58.591 "raid_level": "raid0", 00:06:58.591 "superblock": false, 00:06:58.591 "num_base_bdevs": 2, 00:06:58.591 "num_base_bdevs_discovered": 2, 00:06:58.591 "num_base_bdevs_operational": 2, 00:06:58.591 "base_bdevs_list": [ 00:06:58.591 { 00:06:58.591 "name": "BaseBdev1", 00:06:58.591 "uuid": "1a634937-4992-4db1-b1b6-1752645ccd20", 00:06:58.591 "is_configured": true, 00:06:58.591 "data_offset": 0, 00:06:58.591 "data_size": 65536 00:06:58.591 }, 00:06:58.591 { 00:06:58.591 "name": "BaseBdev2", 
00:06:58.591 "uuid": "f9614e00-1dab-4b3a-8af0-c4be26450fc4", 00:06:58.591 "is_configured": true, 00:06:58.591 "data_offset": 0, 00:06:58.591 "data_size": 65536 00:06:58.591 } 00:06:58.591 ] 00:06:58.591 } 00:06:58.591 } 00:06:58.591 }' 00:06:58.591 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:58.591 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:58.591 BaseBdev2' 00:06:58.591 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:58.591 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:58.591 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:58.591 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:58.591 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:58.591 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.591 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.592 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.592 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:58.592 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:58.592 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:58.592 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:06:58.592 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:58.592 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.592 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.592 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.592 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:58.592 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:58.592 11:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:58.592 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.592 11:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.592 [2024-11-27 11:45:24.922270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:58.592 [2024-11-27 11:45:24.922305] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:58.592 [2024-11-27 11:45:24.922357] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:58.851 11:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.851 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:58.851 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:58.851 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:58.851 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:58.851 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:06:58.851 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:58.851 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:58.851 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:58.851 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:58.851 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:58.851 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:58.851 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:58.851 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:58.851 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:58.851 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:58.851 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.851 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:58.851 11:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.851 11:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.851 11:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.851 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:58.851 "name": "Existed_Raid", 00:06:58.851 "uuid": "ca604fbf-b734-4da6-8baa-6c0e858811e8", 00:06:58.851 "strip_size_kb": 64, 00:06:58.851 
"state": "offline", 00:06:58.851 "raid_level": "raid0", 00:06:58.851 "superblock": false, 00:06:58.851 "num_base_bdevs": 2, 00:06:58.851 "num_base_bdevs_discovered": 1, 00:06:58.851 "num_base_bdevs_operational": 1, 00:06:58.851 "base_bdevs_list": [ 00:06:58.851 { 00:06:58.851 "name": null, 00:06:58.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:58.851 "is_configured": false, 00:06:58.851 "data_offset": 0, 00:06:58.851 "data_size": 65536 00:06:58.851 }, 00:06:58.851 { 00:06:58.851 "name": "BaseBdev2", 00:06:58.851 "uuid": "f9614e00-1dab-4b3a-8af0-c4be26450fc4", 00:06:58.851 "is_configured": true, 00:06:58.851 "data_offset": 0, 00:06:58.851 "data_size": 65536 00:06:58.851 } 00:06:58.851 ] 00:06:58.851 }' 00:06:58.851 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:58.851 11:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.111 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:59.111 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:59.111 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.111 11:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.111 11:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.111 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:59.371 11:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.371 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:59.371 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:59.371 11:45:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:59.371 11:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.371 11:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.371 [2024-11-27 11:45:25.538791] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:59.371 [2024-11-27 11:45:25.538870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:59.371 11:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.371 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:59.371 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:59.371 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.371 11:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.371 11:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.371 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:59.371 11:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.371 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:59.371 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:59.371 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:59.371 11:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60666 00:06:59.371 11:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60666 ']' 00:06:59.371 11:45:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60666 00:06:59.371 11:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:06:59.371 11:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.371 11:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60666 00:06:59.371 killing process with pid 60666 00:06:59.371 11:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.371 11:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.371 11:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60666' 00:06:59.371 11:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60666 00:06:59.371 [2024-11-27 11:45:25.720115] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:59.371 11:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60666 00:06:59.371 [2024-11-27 11:45:25.737060] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:00.750 11:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:00.750 00:07:00.750 real 0m4.998s 00:07:00.750 user 0m7.158s 00:07:00.750 sys 0m0.808s 00:07:00.750 ************************************ 00:07:00.750 END TEST raid_state_function_test 00:07:00.750 ************************************ 00:07:00.750 11:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.750 11:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.751 11:45:26 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:00.751 11:45:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:00.751 11:45:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.751 11:45:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:00.751 ************************************ 00:07:00.751 START TEST raid_state_function_test_sb 00:07:00.751 ************************************ 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60910 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60910' 00:07:00.751 Process raid pid: 60910 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60910 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60910 ']' 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.751 11:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.751 [2024-11-27 11:45:27.041754] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:07:00.751 [2024-11-27 11:45:27.041882] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.010 [2024-11-27 11:45:27.215260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.010 [2024-11-27 11:45:27.330789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.270 [2024-11-27 11:45:27.537803] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:01.270 [2024-11-27 11:45:27.537864] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:01.530 11:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.530 11:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:01.530 11:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:01.530 11:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.530 11:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.530 [2024-11-27 11:45:27.879374] bdev.c:8666:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:01.530 [2024-11-27 11:45:27.879427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:01.530 [2024-11-27 11:45:27.879438] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:01.530 [2024-11-27 11:45:27.879448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:01.530 11:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.530 11:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:01.530 11:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:01.530 11:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:01.530 11:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:01.530 11:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:01.530 11:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:01.530 11:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:01.530 11:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:01.530 11:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:01.530 11:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:01.530 11:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.530 11:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:01.530 11:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.530 11:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.530 11:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.790 11:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:01.790 "name": "Existed_Raid", 00:07:01.790 "uuid": "b1edb2c1-ff83-430d-be9b-d9adfae47108", 00:07:01.790 "strip_size_kb": 64, 00:07:01.790 "state": "configuring", 00:07:01.790 "raid_level": "raid0", 00:07:01.790 "superblock": true, 00:07:01.790 "num_base_bdevs": 2, 00:07:01.790 "num_base_bdevs_discovered": 0, 00:07:01.790 "num_base_bdevs_operational": 2, 00:07:01.790 "base_bdevs_list": [ 00:07:01.790 { 00:07:01.790 "name": "BaseBdev1", 00:07:01.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.790 "is_configured": false, 00:07:01.790 "data_offset": 0, 00:07:01.790 "data_size": 0 00:07:01.790 }, 00:07:01.790 { 00:07:01.790 "name": "BaseBdev2", 00:07:01.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.790 "is_configured": false, 00:07:01.790 "data_offset": 0, 00:07:01.790 "data_size": 0 00:07:01.790 } 00:07:01.790 ] 00:07:01.790 }' 00:07:01.790 11:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:01.790 11:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.050 [2024-11-27 11:45:28.322554] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:02.050 [2024-11-27 11:45:28.322638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.050 [2024-11-27 11:45:28.334529] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:02.050 [2024-11-27 11:45:28.334609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:02.050 [2024-11-27 11:45:28.334638] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:02.050 [2024-11-27 11:45:28.334668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.050 [2024-11-27 11:45:28.381947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:02.050 BaseBdev1 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.050 [ 00:07:02.050 { 00:07:02.050 "name": "BaseBdev1", 00:07:02.050 "aliases": [ 00:07:02.050 "1429bfd1-b21c-4c95-bf62-53061d02be1b" 00:07:02.050 ], 00:07:02.050 "product_name": "Malloc disk", 00:07:02.050 "block_size": 512, 00:07:02.050 "num_blocks": 65536, 00:07:02.050 "uuid": "1429bfd1-b21c-4c95-bf62-53061d02be1b", 00:07:02.050 "assigned_rate_limits": { 00:07:02.050 "rw_ios_per_sec": 0, 00:07:02.050 "rw_mbytes_per_sec": 0, 00:07:02.050 "r_mbytes_per_sec": 0, 00:07:02.050 "w_mbytes_per_sec": 0 00:07:02.050 }, 00:07:02.050 "claimed": true, 
00:07:02.050 "claim_type": "exclusive_write", 00:07:02.050 "zoned": false, 00:07:02.050 "supported_io_types": { 00:07:02.050 "read": true, 00:07:02.050 "write": true, 00:07:02.050 "unmap": true, 00:07:02.050 "flush": true, 00:07:02.050 "reset": true, 00:07:02.050 "nvme_admin": false, 00:07:02.050 "nvme_io": false, 00:07:02.050 "nvme_io_md": false, 00:07:02.050 "write_zeroes": true, 00:07:02.050 "zcopy": true, 00:07:02.050 "get_zone_info": false, 00:07:02.050 "zone_management": false, 00:07:02.050 "zone_append": false, 00:07:02.050 "compare": false, 00:07:02.050 "compare_and_write": false, 00:07:02.050 "abort": true, 00:07:02.050 "seek_hole": false, 00:07:02.050 "seek_data": false, 00:07:02.050 "copy": true, 00:07:02.050 "nvme_iov_md": false 00:07:02.050 }, 00:07:02.050 "memory_domains": [ 00:07:02.050 { 00:07:02.050 "dma_device_id": "system", 00:07:02.050 "dma_device_type": 1 00:07:02.050 }, 00:07:02.050 { 00:07:02.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:02.050 "dma_device_type": 2 00:07:02.050 } 00:07:02.050 ], 00:07:02.050 "driver_specific": {} 00:07:02.050 } 00:07:02.050 ] 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:02.050 11:45:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.050 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.309 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.309 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:02.309 "name": "Existed_Raid", 00:07:02.309 "uuid": "7c9b9bb3-0757-4b2f-8c4d-db08fc0a574d", 00:07:02.309 "strip_size_kb": 64, 00:07:02.309 "state": "configuring", 00:07:02.309 "raid_level": "raid0", 00:07:02.309 "superblock": true, 00:07:02.309 "num_base_bdevs": 2, 00:07:02.309 "num_base_bdevs_discovered": 1, 00:07:02.309 "num_base_bdevs_operational": 2, 00:07:02.309 "base_bdevs_list": [ 00:07:02.309 { 00:07:02.309 "name": "BaseBdev1", 00:07:02.309 "uuid": "1429bfd1-b21c-4c95-bf62-53061d02be1b", 00:07:02.309 "is_configured": true, 00:07:02.309 "data_offset": 2048, 00:07:02.309 "data_size": 63488 00:07:02.309 }, 00:07:02.309 { 00:07:02.309 "name": "BaseBdev2", 00:07:02.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:02.309 
"is_configured": false, 00:07:02.309 "data_offset": 0, 00:07:02.309 "data_size": 0 00:07:02.309 } 00:07:02.309 ] 00:07:02.310 }' 00:07:02.310 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:02.310 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.574 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:02.575 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.575 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.575 [2024-11-27 11:45:28.837192] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:02.575 [2024-11-27 11:45:28.837255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:02.575 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.575 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:02.575 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.575 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.575 [2024-11-27 11:45:28.845222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:02.575 [2024-11-27 11:45:28.846999] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:02.575 [2024-11-27 11:45:28.847043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:02.575 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.575 11:45:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:02.575 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:02.575 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:02.575 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:02.575 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:02.575 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:02.575 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:02.575 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:02.575 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:02.575 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:02.575 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:02.575 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:02.575 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:02.575 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.575 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.575 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.575 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.575 11:45:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:02.575 "name": "Existed_Raid", 00:07:02.575 "uuid": "d0288c6b-5837-47f1-b96b-1d9e984ed65a", 00:07:02.575 "strip_size_kb": 64, 00:07:02.575 "state": "configuring", 00:07:02.575 "raid_level": "raid0", 00:07:02.575 "superblock": true, 00:07:02.575 "num_base_bdevs": 2, 00:07:02.575 "num_base_bdevs_discovered": 1, 00:07:02.575 "num_base_bdevs_operational": 2, 00:07:02.575 "base_bdevs_list": [ 00:07:02.575 { 00:07:02.575 "name": "BaseBdev1", 00:07:02.575 "uuid": "1429bfd1-b21c-4c95-bf62-53061d02be1b", 00:07:02.575 "is_configured": true, 00:07:02.575 "data_offset": 2048, 00:07:02.575 "data_size": 63488 00:07:02.575 }, 00:07:02.575 { 00:07:02.575 "name": "BaseBdev2", 00:07:02.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:02.575 "is_configured": false, 00:07:02.575 "data_offset": 0, 00:07:02.575 "data_size": 0 00:07:02.575 } 00:07:02.575 ] 00:07:02.575 }' 00:07:02.575 11:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:02.575 11:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.189 [2024-11-27 11:45:29.316169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:03.189 [2024-11-27 11:45:29.316559] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:03.189 [2024-11-27 11:45:29.316627] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:03.189 [2024-11-27 11:45:29.317000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:07:03.189 BaseBdev2 00:07:03.189 [2024-11-27 11:45:29.317231] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:03.189 [2024-11-27 11:45:29.317315] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:03.189 [2024-11-27 11:45:29.317527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.189 11:45:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.189 [ 00:07:03.189 { 00:07:03.189 "name": "BaseBdev2", 00:07:03.189 "aliases": [ 00:07:03.189 "4d36afe4-da15-4d7a-8062-f1b141a9ec2b" 00:07:03.189 ], 00:07:03.189 "product_name": "Malloc disk", 00:07:03.189 "block_size": 512, 00:07:03.189 "num_blocks": 65536, 00:07:03.189 "uuid": "4d36afe4-da15-4d7a-8062-f1b141a9ec2b", 00:07:03.189 "assigned_rate_limits": { 00:07:03.189 "rw_ios_per_sec": 0, 00:07:03.189 "rw_mbytes_per_sec": 0, 00:07:03.189 "r_mbytes_per_sec": 0, 00:07:03.189 "w_mbytes_per_sec": 0 00:07:03.189 }, 00:07:03.189 "claimed": true, 00:07:03.189 "claim_type": "exclusive_write", 00:07:03.189 "zoned": false, 00:07:03.189 "supported_io_types": { 00:07:03.189 "read": true, 00:07:03.189 "write": true, 00:07:03.189 "unmap": true, 00:07:03.189 "flush": true, 00:07:03.189 "reset": true, 00:07:03.189 "nvme_admin": false, 00:07:03.189 "nvme_io": false, 00:07:03.189 "nvme_io_md": false, 00:07:03.189 "write_zeroes": true, 00:07:03.189 "zcopy": true, 00:07:03.189 "get_zone_info": false, 00:07:03.189 "zone_management": false, 00:07:03.189 "zone_append": false, 00:07:03.189 "compare": false, 00:07:03.189 "compare_and_write": false, 00:07:03.189 "abort": true, 00:07:03.189 "seek_hole": false, 00:07:03.189 "seek_data": false, 00:07:03.189 "copy": true, 00:07:03.189 "nvme_iov_md": false 00:07:03.189 }, 00:07:03.189 "memory_domains": [ 00:07:03.189 { 00:07:03.189 "dma_device_id": "system", 00:07:03.189 "dma_device_type": 1 00:07:03.189 }, 00:07:03.189 { 00:07:03.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.189 "dma_device_type": 2 00:07:03.189 } 00:07:03.189 ], 00:07:03.189 "driver_specific": {} 00:07:03.189 } 00:07:03.189 ] 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:03.189 11:45:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.189 11:45:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.189 "name": "Existed_Raid", 00:07:03.189 "uuid": "d0288c6b-5837-47f1-b96b-1d9e984ed65a", 00:07:03.189 "strip_size_kb": 64, 00:07:03.189 "state": "online", 00:07:03.189 "raid_level": "raid0", 00:07:03.189 "superblock": true, 00:07:03.189 "num_base_bdevs": 2, 00:07:03.189 "num_base_bdevs_discovered": 2, 00:07:03.189 "num_base_bdevs_operational": 2, 00:07:03.189 "base_bdevs_list": [ 00:07:03.189 { 00:07:03.189 "name": "BaseBdev1", 00:07:03.189 "uuid": "1429bfd1-b21c-4c95-bf62-53061d02be1b", 00:07:03.189 "is_configured": true, 00:07:03.189 "data_offset": 2048, 00:07:03.189 "data_size": 63488 00:07:03.189 }, 00:07:03.189 { 00:07:03.189 "name": "BaseBdev2", 00:07:03.189 "uuid": "4d36afe4-da15-4d7a-8062-f1b141a9ec2b", 00:07:03.189 "is_configured": true, 00:07:03.189 "data_offset": 2048, 00:07:03.189 "data_size": 63488 00:07:03.189 } 00:07:03.189 ] 00:07:03.189 }' 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:03.189 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.448 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:03.448 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:03.448 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:03.448 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:03.448 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:03.448 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:03.448 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:03.448 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.448 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.448 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:03.448 [2024-11-27 11:45:29.755884] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.448 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.448 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:03.448 "name": "Existed_Raid", 00:07:03.448 "aliases": [ 00:07:03.448 "d0288c6b-5837-47f1-b96b-1d9e984ed65a" 00:07:03.448 ], 00:07:03.448 "product_name": "Raid Volume", 00:07:03.448 "block_size": 512, 00:07:03.448 "num_blocks": 126976, 00:07:03.448 "uuid": "d0288c6b-5837-47f1-b96b-1d9e984ed65a", 00:07:03.448 "assigned_rate_limits": { 00:07:03.448 "rw_ios_per_sec": 0, 00:07:03.448 "rw_mbytes_per_sec": 0, 00:07:03.448 "r_mbytes_per_sec": 0, 00:07:03.448 "w_mbytes_per_sec": 0 00:07:03.448 }, 00:07:03.448 "claimed": false, 00:07:03.448 "zoned": false, 00:07:03.448 "supported_io_types": { 00:07:03.448 "read": true, 00:07:03.448 "write": true, 00:07:03.448 "unmap": true, 00:07:03.448 "flush": true, 00:07:03.448 "reset": true, 00:07:03.448 "nvme_admin": false, 00:07:03.448 "nvme_io": false, 00:07:03.448 "nvme_io_md": false, 00:07:03.448 "write_zeroes": true, 00:07:03.448 "zcopy": false, 00:07:03.448 "get_zone_info": false, 00:07:03.448 "zone_management": false, 00:07:03.448 "zone_append": false, 00:07:03.448 "compare": false, 00:07:03.448 "compare_and_write": false, 00:07:03.448 "abort": false, 00:07:03.448 "seek_hole": false, 00:07:03.448 "seek_data": false, 00:07:03.448 "copy": false, 00:07:03.448 "nvme_iov_md": false 00:07:03.448 }, 00:07:03.448 "memory_domains": [ 00:07:03.448 { 00:07:03.448 
"dma_device_id": "system", 00:07:03.448 "dma_device_type": 1 00:07:03.448 }, 00:07:03.448 { 00:07:03.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.448 "dma_device_type": 2 00:07:03.448 }, 00:07:03.448 { 00:07:03.448 "dma_device_id": "system", 00:07:03.448 "dma_device_type": 1 00:07:03.448 }, 00:07:03.448 { 00:07:03.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.449 "dma_device_type": 2 00:07:03.449 } 00:07:03.449 ], 00:07:03.449 "driver_specific": { 00:07:03.449 "raid": { 00:07:03.449 "uuid": "d0288c6b-5837-47f1-b96b-1d9e984ed65a", 00:07:03.449 "strip_size_kb": 64, 00:07:03.449 "state": "online", 00:07:03.449 "raid_level": "raid0", 00:07:03.449 "superblock": true, 00:07:03.449 "num_base_bdevs": 2, 00:07:03.449 "num_base_bdevs_discovered": 2, 00:07:03.449 "num_base_bdevs_operational": 2, 00:07:03.449 "base_bdevs_list": [ 00:07:03.449 { 00:07:03.449 "name": "BaseBdev1", 00:07:03.449 "uuid": "1429bfd1-b21c-4c95-bf62-53061d02be1b", 00:07:03.449 "is_configured": true, 00:07:03.449 "data_offset": 2048, 00:07:03.449 "data_size": 63488 00:07:03.449 }, 00:07:03.449 { 00:07:03.449 "name": "BaseBdev2", 00:07:03.449 "uuid": "4d36afe4-da15-4d7a-8062-f1b141a9ec2b", 00:07:03.449 "is_configured": true, 00:07:03.449 "data_offset": 2048, 00:07:03.449 "data_size": 63488 00:07:03.449 } 00:07:03.449 ] 00:07:03.449 } 00:07:03.449 } 00:07:03.449 }' 00:07:03.449 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:03.707 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:03.707 BaseBdev2' 00:07:03.707 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.707 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:03.707 11:45:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:03.707 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:03.707 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.707 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.707 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.707 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.707 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:03.707 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:03.707 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:03.707 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:03.707 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.707 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.707 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.707 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.707 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:03.707 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:03.707 11:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:03.707 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.707 11:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.707 [2024-11-27 11:45:29.995196] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:03.707 [2024-11-27 11:45:29.995230] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:03.707 [2024-11-27 11:45:29.995279] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:03.966 11:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.966 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:03.966 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:03.966 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:03.966 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:03.966 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:03.966 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:03.966 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:03.966 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:03.966 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:03.966 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.966 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:03.966 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.966 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.966 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.966 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.966 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.966 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:03.966 11:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.966 11:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.966 11:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.966 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.966 "name": "Existed_Raid", 00:07:03.966 "uuid": "d0288c6b-5837-47f1-b96b-1d9e984ed65a", 00:07:03.966 "strip_size_kb": 64, 00:07:03.966 "state": "offline", 00:07:03.966 "raid_level": "raid0", 00:07:03.966 "superblock": true, 00:07:03.966 "num_base_bdevs": 2, 00:07:03.966 "num_base_bdevs_discovered": 1, 00:07:03.966 "num_base_bdevs_operational": 1, 00:07:03.966 "base_bdevs_list": [ 00:07:03.966 { 00:07:03.966 "name": null, 00:07:03.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.966 "is_configured": false, 00:07:03.966 "data_offset": 0, 00:07:03.966 "data_size": 63488 00:07:03.966 }, 00:07:03.966 { 00:07:03.966 "name": "BaseBdev2", 00:07:03.966 "uuid": "4d36afe4-da15-4d7a-8062-f1b141a9ec2b", 00:07:03.966 "is_configured": true, 00:07:03.966 "data_offset": 2048, 00:07:03.966 "data_size": 63488 00:07:03.966 } 00:07:03.966 ] 
00:07:03.966 }' 00:07:03.966 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:03.966 11:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.225 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:04.225 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:04.225 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.225 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:04.225 11:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.225 11:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.225 11:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.225 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:04.225 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:04.225 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:04.225 11:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.225 11:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.225 [2024-11-27 11:45:30.572222] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:04.225 [2024-11-27 11:45:30.572279] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:04.484 11:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.484 11:45:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:04.484 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:04.484 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.484 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:04.484 11:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.484 11:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.484 11:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.484 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:04.484 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:04.484 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:04.484 11:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60910 00:07:04.484 11:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60910 ']' 00:07:04.484 11:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60910 00:07:04.484 11:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:04.484 11:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.484 11:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60910 00:07:04.484 killing process with pid 60910 00:07:04.484 11:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.484 11:45:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.484 11:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60910' 00:07:04.484 11:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60910 00:07:04.484 11:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60910 00:07:04.484 [2024-11-27 11:45:30.746042] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:04.484 [2024-11-27 11:45:30.763838] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:05.861 11:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:05.861 00:07:05.861 real 0m4.964s 00:07:05.861 user 0m7.120s 00:07:05.861 sys 0m0.791s 00:07:05.861 ************************************ 00:07:05.861 END TEST raid_state_function_test_sb 00:07:05.861 ************************************ 00:07:05.861 11:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.861 11:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.861 11:45:31 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:05.861 11:45:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:05.861 11:45:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.861 11:45:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:05.861 ************************************ 00:07:05.861 START TEST raid_superblock_test 00:07:05.861 ************************************ 00:07:05.861 11:45:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:05.861 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:05.861 11:45:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:05.861 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:05.861 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:05.861 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:05.861 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:05.861 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:05.861 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:05.861 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:05.861 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:05.861 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:05.861 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:05.861 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:05.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:05.861 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:05.861 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:05.861 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:05.861 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61162 00:07:05.861 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61162 00:07:05.861 11:45:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61162 ']' 00:07:05.861 11:45:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.861 11:45:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.861 11:45:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.861 11:45:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.861 11:45:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.861 11:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:05.861 [2024-11-27 11:45:32.052651] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:07:05.861 [2024-11-27 11:45:32.052894] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61162 ] 00:07:05.861 [2024-11-27 11:45:32.227774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.121 [2024-11-27 11:45:32.344122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.381 [2024-11-27 11:45:32.548061] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.381 [2024-11-27 11:45:32.548150] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.643 11:45:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.643 11:45:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:06.643 11:45:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:06.643 11:45:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:06.643 11:45:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:06.643 11:45:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:06.643 11:45:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:06.643 11:45:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:06.643 11:45:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:06.643 11:45:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:06.643 11:45:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:06.643 
11:45:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.643 11:45:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.643 malloc1 00:07:06.643 11:45:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.643 11:45:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:06.643 11:45:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.643 11:45:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.643 [2024-11-27 11:45:32.951949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:06.643 [2024-11-27 11:45:32.952082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:06.643 [2024-11-27 11:45:32.952138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:06.643 [2024-11-27 11:45:32.952205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:06.643 [2024-11-27 11:45:32.954514] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:06.643 [2024-11-27 11:45:32.954606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:06.643 pt1 00:07:06.643 11:45:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.643 11:45:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:06.643 11:45:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:06.643 11:45:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:06.643 11:45:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:06.643 11:45:32 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:06.643 11:45:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:06.643 11:45:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:06.643 11:45:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:06.643 11:45:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:06.643 11:45:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.643 11:45:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.643 malloc2 00:07:06.643 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.643 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:06.643 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.644 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.644 [2024-11-27 11:45:33.009695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:06.644 [2024-11-27 11:45:33.009755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:06.644 [2024-11-27 11:45:33.009780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:06.644 [2024-11-27 11:45:33.009789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:06.644 [2024-11-27 11:45:33.012080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:06.644 [2024-11-27 11:45:33.012117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:06.644 
pt2 00:07:06.644 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.644 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:06.644 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:06.644 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:06.644 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.644 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.644 [2024-11-27 11:45:33.017737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:06.644 [2024-11-27 11:45:33.019555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:06.644 [2024-11-27 11:45:33.019824] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:06.644 [2024-11-27 11:45:33.019846] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:06.644 [2024-11-27 11:45:33.020138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:06.644 [2024-11-27 11:45:33.020305] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:06.644 [2024-11-27 11:45:33.020318] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:06.644 [2024-11-27 11:45:33.020493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:06.644 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.644 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:06.644 11:45:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:06.644 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:06.644 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:06.644 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:06.644 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:06.644 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:06.644 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:06.644 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:06.644 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:06.908 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.908 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.908 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.908 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:06.908 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.908 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:06.908 "name": "raid_bdev1", 00:07:06.908 "uuid": "4f6efa89-df87-4445-a281-6d86c64f08e6", 00:07:06.908 "strip_size_kb": 64, 00:07:06.908 "state": "online", 00:07:06.908 "raid_level": "raid0", 00:07:06.908 "superblock": true, 00:07:06.908 "num_base_bdevs": 2, 00:07:06.908 "num_base_bdevs_discovered": 2, 00:07:06.908 "num_base_bdevs_operational": 2, 00:07:06.908 "base_bdevs_list": [ 00:07:06.908 { 00:07:06.908 "name": "pt1", 
00:07:06.908 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:06.908 "is_configured": true, 00:07:06.908 "data_offset": 2048, 00:07:06.908 "data_size": 63488 00:07:06.908 }, 00:07:06.908 { 00:07:06.908 "name": "pt2", 00:07:06.908 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:06.908 "is_configured": true, 00:07:06.908 "data_offset": 2048, 00:07:06.908 "data_size": 63488 00:07:06.908 } 00:07:06.908 ] 00:07:06.908 }' 00:07:06.908 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:06.908 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.168 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:07.169 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:07.169 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:07.169 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:07.169 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:07.169 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:07.169 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:07.169 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:07.169 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.169 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.169 [2024-11-27 11:45:33.449315] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:07.169 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.169 11:45:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:07.169 "name": "raid_bdev1", 00:07:07.169 "aliases": [ 00:07:07.169 "4f6efa89-df87-4445-a281-6d86c64f08e6" 00:07:07.169 ], 00:07:07.169 "product_name": "Raid Volume", 00:07:07.169 "block_size": 512, 00:07:07.169 "num_blocks": 126976, 00:07:07.169 "uuid": "4f6efa89-df87-4445-a281-6d86c64f08e6", 00:07:07.169 "assigned_rate_limits": { 00:07:07.169 "rw_ios_per_sec": 0, 00:07:07.169 "rw_mbytes_per_sec": 0, 00:07:07.169 "r_mbytes_per_sec": 0, 00:07:07.169 "w_mbytes_per_sec": 0 00:07:07.169 }, 00:07:07.169 "claimed": false, 00:07:07.169 "zoned": false, 00:07:07.169 "supported_io_types": { 00:07:07.169 "read": true, 00:07:07.169 "write": true, 00:07:07.169 "unmap": true, 00:07:07.169 "flush": true, 00:07:07.169 "reset": true, 00:07:07.169 "nvme_admin": false, 00:07:07.169 "nvme_io": false, 00:07:07.169 "nvme_io_md": false, 00:07:07.169 "write_zeroes": true, 00:07:07.169 "zcopy": false, 00:07:07.169 "get_zone_info": false, 00:07:07.169 "zone_management": false, 00:07:07.169 "zone_append": false, 00:07:07.169 "compare": false, 00:07:07.169 "compare_and_write": false, 00:07:07.169 "abort": false, 00:07:07.169 "seek_hole": false, 00:07:07.169 "seek_data": false, 00:07:07.169 "copy": false, 00:07:07.169 "nvme_iov_md": false 00:07:07.169 }, 00:07:07.169 "memory_domains": [ 00:07:07.169 { 00:07:07.169 "dma_device_id": "system", 00:07:07.169 "dma_device_type": 1 00:07:07.169 }, 00:07:07.169 { 00:07:07.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.169 "dma_device_type": 2 00:07:07.169 }, 00:07:07.169 { 00:07:07.169 "dma_device_id": "system", 00:07:07.169 "dma_device_type": 1 00:07:07.169 }, 00:07:07.169 { 00:07:07.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.169 "dma_device_type": 2 00:07:07.169 } 00:07:07.169 ], 00:07:07.169 "driver_specific": { 00:07:07.169 "raid": { 00:07:07.169 "uuid": "4f6efa89-df87-4445-a281-6d86c64f08e6", 00:07:07.169 "strip_size_kb": 64, 00:07:07.169 "state": "online", 00:07:07.169 
"raid_level": "raid0", 00:07:07.169 "superblock": true, 00:07:07.169 "num_base_bdevs": 2, 00:07:07.169 "num_base_bdevs_discovered": 2, 00:07:07.169 "num_base_bdevs_operational": 2, 00:07:07.169 "base_bdevs_list": [ 00:07:07.169 { 00:07:07.169 "name": "pt1", 00:07:07.169 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:07.169 "is_configured": true, 00:07:07.169 "data_offset": 2048, 00:07:07.169 "data_size": 63488 00:07:07.169 }, 00:07:07.169 { 00:07:07.169 "name": "pt2", 00:07:07.169 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:07.169 "is_configured": true, 00:07:07.169 "data_offset": 2048, 00:07:07.169 "data_size": 63488 00:07:07.169 } 00:07:07.169 ] 00:07:07.169 } 00:07:07.169 } 00:07:07.169 }' 00:07:07.169 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:07.169 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:07.169 pt2' 00:07:07.169 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:07.428 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:07.428 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:07.428 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:07.428 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:07.428 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.428 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.428 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.428 11:45:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:07.428 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:07.429 [2024-11-27 11:45:33.692907] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4f6efa89-df87-4445-a281-6d86c64f08e6 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
4f6efa89-df87-4445-a281-6d86c64f08e6 ']' 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.429 [2024-11-27 11:45:33.736470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:07.429 [2024-11-27 11:45:33.736499] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:07.429 [2024-11-27 11:45:33.736599] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:07.429 [2024-11-27 11:45:33.736651] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:07.429 [2024-11-27 11:45:33.736665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:07.429 11:45:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.429 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.699 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.699 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:07.699 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:07.699 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:07.699 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create 
-z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:07.699 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:07.699 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.699 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:07.699 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.699 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.700 [2024-11-27 11:45:33.856335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:07.700 [2024-11-27 11:45:33.858404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:07.700 [2024-11-27 11:45:33.858551] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:07.700 [2024-11-27 11:45:33.858616] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:07.700 [2024-11-27 11:45:33.858632] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:07.700 [2024-11-27 11:45:33.858647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:07.700 request: 00:07:07.700 { 00:07:07.700 "name": "raid_bdev1", 00:07:07.700 "raid_level": "raid0", 00:07:07.700 "base_bdevs": [ 00:07:07.700 "malloc1", 00:07:07.700 "malloc2" 00:07:07.700 ], 00:07:07.700 "strip_size_kb": 64, 00:07:07.700 
"superblock": false, 00:07:07.700 "method": "bdev_raid_create", 00:07:07.700 "req_id": 1 00:07:07.700 } 00:07:07.700 Got JSON-RPC error response 00:07:07.700 response: 00:07:07.700 { 00:07:07.700 "code": -17, 00:07:07.700 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:07.700 } 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.700 [2024-11-27 11:45:33.916196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on malloc1 00:07:07.700 [2024-11-27 11:45:33.916309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:07.700 [2024-11-27 11:45:33.916346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:07.700 [2024-11-27 11:45:33.916381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:07.700 [2024-11-27 11:45:33.918673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:07.700 [2024-11-27 11:45:33.918762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:07.700 [2024-11-27 11:45:33.918910] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:07.700 [2024-11-27 11:45:33.919027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:07.700 pt1 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:07.700 "name": "raid_bdev1", 00:07:07.700 "uuid": "4f6efa89-df87-4445-a281-6d86c64f08e6", 00:07:07.700 "strip_size_kb": 64, 00:07:07.700 "state": "configuring", 00:07:07.700 "raid_level": "raid0", 00:07:07.700 "superblock": true, 00:07:07.700 "num_base_bdevs": 2, 00:07:07.700 "num_base_bdevs_discovered": 1, 00:07:07.700 "num_base_bdevs_operational": 2, 00:07:07.700 "base_bdevs_list": [ 00:07:07.700 { 00:07:07.700 "name": "pt1", 00:07:07.700 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:07.700 "is_configured": true, 00:07:07.700 "data_offset": 2048, 00:07:07.700 "data_size": 63488 00:07:07.700 }, 00:07:07.700 { 00:07:07.700 "name": null, 00:07:07.700 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:07.700 "is_configured": false, 00:07:07.700 "data_offset": 2048, 00:07:07.700 "data_size": 63488 00:07:07.700 } 00:07:07.700 ] 00:07:07.700 }' 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.700 11:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.269 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:08.269 11:45:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:08.269 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:08.269 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:08.269 11:45:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.269 11:45:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.269 [2024-11-27 11:45:34.391492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:08.269 [2024-11-27 11:45:34.391652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:08.269 [2024-11-27 11:45:34.391683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:08.269 [2024-11-27 11:45:34.391696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:08.269 [2024-11-27 11:45:34.392230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:08.269 [2024-11-27 11:45:34.392264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:08.269 [2024-11-27 11:45:34.392359] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:08.269 [2024-11-27 11:45:34.392389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:08.269 [2024-11-27 11:45:34.392514] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:08.269 [2024-11-27 11:45:34.392536] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:08.269 [2024-11-27 11:45:34.392805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:08.269 [2024-11-27 11:45:34.392983] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:07:08.269 [2024-11-27 11:45:34.392994] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:08.269 [2024-11-27 11:45:34.393145] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:08.269 pt2 00:07:08.269 11:45:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.269 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:08.269 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:08.269 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:08.269 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:08.269 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:08.269 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:08.269 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:08.269 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:08.269 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.269 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:08.269 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.269 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.269 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:08.269 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.269 11:45:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.269 11:45:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.269 11:45:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.269 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.269 "name": "raid_bdev1", 00:07:08.269 "uuid": "4f6efa89-df87-4445-a281-6d86c64f08e6", 00:07:08.269 "strip_size_kb": 64, 00:07:08.269 "state": "online", 00:07:08.269 "raid_level": "raid0", 00:07:08.269 "superblock": true, 00:07:08.269 "num_base_bdevs": 2, 00:07:08.269 "num_base_bdevs_discovered": 2, 00:07:08.269 "num_base_bdevs_operational": 2, 00:07:08.269 "base_bdevs_list": [ 00:07:08.269 { 00:07:08.269 "name": "pt1", 00:07:08.269 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:08.269 "is_configured": true, 00:07:08.269 "data_offset": 2048, 00:07:08.269 "data_size": 63488 00:07:08.269 }, 00:07:08.269 { 00:07:08.269 "name": "pt2", 00:07:08.269 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:08.269 "is_configured": true, 00:07:08.269 "data_offset": 2048, 00:07:08.269 "data_size": 63488 00:07:08.269 } 00:07:08.269 ] 00:07:08.269 }' 00:07:08.269 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.269 11:45:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.529 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:08.529 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:08.529 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:08.529 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:08.529 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:08.529 11:45:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:08.529 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:08.529 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:08.529 11:45:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.529 11:45:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.529 [2024-11-27 11:45:34.858949] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:08.529 11:45:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.529 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:08.529 "name": "raid_bdev1", 00:07:08.529 "aliases": [ 00:07:08.529 "4f6efa89-df87-4445-a281-6d86c64f08e6" 00:07:08.529 ], 00:07:08.529 "product_name": "Raid Volume", 00:07:08.529 "block_size": 512, 00:07:08.529 "num_blocks": 126976, 00:07:08.529 "uuid": "4f6efa89-df87-4445-a281-6d86c64f08e6", 00:07:08.529 "assigned_rate_limits": { 00:07:08.529 "rw_ios_per_sec": 0, 00:07:08.529 "rw_mbytes_per_sec": 0, 00:07:08.529 "r_mbytes_per_sec": 0, 00:07:08.529 "w_mbytes_per_sec": 0 00:07:08.529 }, 00:07:08.529 "claimed": false, 00:07:08.529 "zoned": false, 00:07:08.529 "supported_io_types": { 00:07:08.529 "read": true, 00:07:08.529 "write": true, 00:07:08.529 "unmap": true, 00:07:08.529 "flush": true, 00:07:08.529 "reset": true, 00:07:08.529 "nvme_admin": false, 00:07:08.529 "nvme_io": false, 00:07:08.529 "nvme_io_md": false, 00:07:08.529 "write_zeroes": true, 00:07:08.529 "zcopy": false, 00:07:08.529 "get_zone_info": false, 00:07:08.529 "zone_management": false, 00:07:08.529 "zone_append": false, 00:07:08.529 "compare": false, 00:07:08.529 "compare_and_write": false, 00:07:08.529 "abort": false, 00:07:08.529 "seek_hole": false, 00:07:08.529 
"seek_data": false, 00:07:08.529 "copy": false, 00:07:08.529 "nvme_iov_md": false 00:07:08.529 }, 00:07:08.529 "memory_domains": [ 00:07:08.529 { 00:07:08.529 "dma_device_id": "system", 00:07:08.529 "dma_device_type": 1 00:07:08.529 }, 00:07:08.529 { 00:07:08.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.529 "dma_device_type": 2 00:07:08.529 }, 00:07:08.529 { 00:07:08.529 "dma_device_id": "system", 00:07:08.529 "dma_device_type": 1 00:07:08.529 }, 00:07:08.529 { 00:07:08.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.529 "dma_device_type": 2 00:07:08.529 } 00:07:08.529 ], 00:07:08.529 "driver_specific": { 00:07:08.529 "raid": { 00:07:08.529 "uuid": "4f6efa89-df87-4445-a281-6d86c64f08e6", 00:07:08.529 "strip_size_kb": 64, 00:07:08.529 "state": "online", 00:07:08.529 "raid_level": "raid0", 00:07:08.529 "superblock": true, 00:07:08.529 "num_base_bdevs": 2, 00:07:08.529 "num_base_bdevs_discovered": 2, 00:07:08.529 "num_base_bdevs_operational": 2, 00:07:08.529 "base_bdevs_list": [ 00:07:08.529 { 00:07:08.529 "name": "pt1", 00:07:08.529 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:08.529 "is_configured": true, 00:07:08.529 "data_offset": 2048, 00:07:08.529 "data_size": 63488 00:07:08.529 }, 00:07:08.529 { 00:07:08.529 "name": "pt2", 00:07:08.529 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:08.529 "is_configured": true, 00:07:08.529 "data_offset": 2048, 00:07:08.529 "data_size": 63488 00:07:08.529 } 00:07:08.529 ] 00:07:08.529 } 00:07:08.529 } 00:07:08.529 }' 00:07:08.529 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:08.788 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:08.789 pt2' 00:07:08.789 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.789 11:45:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:08.789 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.789 11:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.789 [2024-11-27 11:45:35.110543] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4f6efa89-df87-4445-a281-6d86c64f08e6 '!=' 4f6efa89-df87-4445-a281-6d86c64f08e6 ']' 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61162 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61162 ']' 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61162 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.789 11:45:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61162 00:07:09.054 killing process with pid 61162 00:07:09.054 11:45:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:09.054 11:45:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:09.054 11:45:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 61162' 00:07:09.054 11:45:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61162 00:07:09.054 11:45:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61162 00:07:09.054 [2024-11-27 11:45:35.187827] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:09.054 [2024-11-27 11:45:35.187955] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:09.054 [2024-11-27 11:45:35.188014] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:09.054 [2024-11-27 11:45:35.188028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:09.054 [2024-11-27 11:45:35.401803] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:10.434 11:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:10.434 00:07:10.434 real 0m4.572s 00:07:10.434 user 0m6.484s 00:07:10.434 sys 0m0.725s 00:07:10.434 11:45:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.434 11:45:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.434 ************************************ 00:07:10.434 END TEST raid_superblock_test 00:07:10.434 ************************************ 00:07:10.434 11:45:36 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:10.434 11:45:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:10.434 11:45:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.434 11:45:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:10.434 ************************************ 00:07:10.434 START TEST raid_read_error_test 00:07:10.434 ************************************ 00:07:10.434 11:45:36 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:10.434 11:45:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.p39XvUNRrz 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61374 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61374 00:07:10.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61374 ']' 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.434 11:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:10.434 [2024-11-27 11:45:36.707088] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:07:10.434 [2024-11-27 11:45:36.707283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61374 ] 00:07:10.694 [2024-11-27 11:45:36.880508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.694 [2024-11-27 11:45:37.000546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.952 [2024-11-27 11:45:37.204774] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.952 [2024-11-27 11:45:37.204821] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.211 11:45:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.211 11:45:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:11.211 11:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:11.211 11:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:11.211 11:45:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.211 11:45:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.211 BaseBdev1_malloc 00:07:11.211 11:45:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.211 11:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:11.211 11:45:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.211 11:45:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.211 true 00:07:11.211 11:45:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:11.211 11:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:11.211 11:45:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.211 11:45:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.211 [2024-11-27 11:45:37.587649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:11.211 [2024-11-27 11:45:37.587781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:11.211 [2024-11-27 11:45:37.587815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:11.211 [2024-11-27 11:45:37.587852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:11.211 [2024-11-27 11:45:37.590124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:11.211 [2024-11-27 11:45:37.590173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:11.211 BaseBdev1 00:07:11.211 11:45:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.211 11:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:11.211 11:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:11.211 11:45:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.211 11:45:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.470 BaseBdev2_malloc 00:07:11.470 11:45:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.470 11:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:11.470 11:45:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.470 11:45:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.470 true 00:07:11.470 11:45:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.470 11:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:11.470 11:45:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.470 11:45:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.470 [2024-11-27 11:45:37.643165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:11.470 [2024-11-27 11:45:37.643217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:11.470 [2024-11-27 11:45:37.643234] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:11.470 [2024-11-27 11:45:37.643243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:11.470 [2024-11-27 11:45:37.645317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:11.470 [2024-11-27 11:45:37.645355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:11.470 BaseBdev2 00:07:11.470 11:45:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.470 11:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:11.470 11:45:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.470 11:45:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.470 [2024-11-27 11:45:37.651208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:07:11.471 [2024-11-27 11:45:37.653005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:11.471 [2024-11-27 11:45:37.653192] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:11.471 [2024-11-27 11:45:37.653211] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:11.471 [2024-11-27 11:45:37.653432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:11.471 [2024-11-27 11:45:37.653605] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:11.471 [2024-11-27 11:45:37.653619] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:11.471 [2024-11-27 11:45:37.653762] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:11.471 11:45:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.471 11:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:11.471 11:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:11.471 11:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:11.471 11:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:11.471 11:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.471 11:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:11.471 11:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.471 11:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.471 11:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:11.471 11:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.471 11:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.471 11:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:11.471 11:45:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.471 11:45:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.471 11:45:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.471 11:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.471 "name": "raid_bdev1", 00:07:11.471 "uuid": "07dbbfe2-8e69-412d-b069-8ab5c9fdf568", 00:07:11.471 "strip_size_kb": 64, 00:07:11.471 "state": "online", 00:07:11.471 "raid_level": "raid0", 00:07:11.471 "superblock": true, 00:07:11.471 "num_base_bdevs": 2, 00:07:11.471 "num_base_bdevs_discovered": 2, 00:07:11.471 "num_base_bdevs_operational": 2, 00:07:11.471 "base_bdevs_list": [ 00:07:11.471 { 00:07:11.471 "name": "BaseBdev1", 00:07:11.471 "uuid": "d793e1af-18be-5ffa-be2b-3726851520f9", 00:07:11.471 "is_configured": true, 00:07:11.471 "data_offset": 2048, 00:07:11.471 "data_size": 63488 00:07:11.471 }, 00:07:11.471 { 00:07:11.471 "name": "BaseBdev2", 00:07:11.471 "uuid": "383f4834-494e-5a66-9d6d-7b9899f1c0dd", 00:07:11.471 "is_configured": true, 00:07:11.471 "data_offset": 2048, 00:07:11.471 "data_size": 63488 00:07:11.471 } 00:07:11.471 ] 00:07:11.471 }' 00:07:11.471 11:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.471 11:45:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.730 11:45:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:11.730 11:45:38 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:11.990 [2024-11-27 11:45:38.131700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:12.943 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:12.943 11:45:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.943 11:45:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.943 11:45:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.943 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:12.943 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:12.943 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:12.943 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:12.943 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:12.943 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:12.943 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:12.943 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.943 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:12.943 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.943 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.943 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:12.943 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.943 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:12.943 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.943 11:45:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.943 11:45:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.943 11:45:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.943 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.943 "name": "raid_bdev1", 00:07:12.943 "uuid": "07dbbfe2-8e69-412d-b069-8ab5c9fdf568", 00:07:12.943 "strip_size_kb": 64, 00:07:12.943 "state": "online", 00:07:12.943 "raid_level": "raid0", 00:07:12.943 "superblock": true, 00:07:12.943 "num_base_bdevs": 2, 00:07:12.943 "num_base_bdevs_discovered": 2, 00:07:12.943 "num_base_bdevs_operational": 2, 00:07:12.943 "base_bdevs_list": [ 00:07:12.943 { 00:07:12.943 "name": "BaseBdev1", 00:07:12.943 "uuid": "d793e1af-18be-5ffa-be2b-3726851520f9", 00:07:12.943 "is_configured": true, 00:07:12.943 "data_offset": 2048, 00:07:12.943 "data_size": 63488 00:07:12.943 }, 00:07:12.943 { 00:07:12.943 "name": "BaseBdev2", 00:07:12.943 "uuid": "383f4834-494e-5a66-9d6d-7b9899f1c0dd", 00:07:12.943 "is_configured": true, 00:07:12.943 "data_offset": 2048, 00:07:12.943 "data_size": 63488 00:07:12.943 } 00:07:12.943 ] 00:07:12.943 }' 00:07:12.943 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.943 11:45:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.203 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:13.203 11:45:39 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.203 11:45:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.203 [2024-11-27 11:45:39.486106] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:13.203 [2024-11-27 11:45:39.486206] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:13.203 [2024-11-27 11:45:39.489062] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:13.203 [2024-11-27 11:45:39.489142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:13.203 [2024-11-27 11:45:39.489190] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:13.203 [2024-11-27 11:45:39.489232] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:13.203 11:45:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.203 11:45:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61374 00:07:13.203 11:45:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61374 ']' 00:07:13.203 { 00:07:13.203 "results": [ 00:07:13.203 { 00:07:13.203 "job": "raid_bdev1", 00:07:13.203 "core_mask": "0x1", 00:07:13.203 "workload": "randrw", 00:07:13.203 "percentage": 50, 00:07:13.203 "status": "finished", 00:07:13.203 "queue_depth": 1, 00:07:13.203 "io_size": 131072, 00:07:13.203 "runtime": 1.355391, 00:07:13.203 "iops": 16049.243354869554, 00:07:13.203 "mibps": 2006.1554193586942, 00:07:13.203 "io_failed": 1, 00:07:13.203 "io_timeout": 0, 00:07:13.203 "avg_latency_us": 86.22538853467897, 00:07:13.203 "min_latency_us": 26.047161572052403, 00:07:13.203 "max_latency_us": 1645.5545851528384 00:07:13.203 } 00:07:13.203 ], 00:07:13.203 "core_count": 1 00:07:13.203 } 00:07:13.203 11:45:39 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61374 00:07:13.203 11:45:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:13.203 11:45:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.203 11:45:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61374 00:07:13.203 killing process with pid 61374 00:07:13.203 11:45:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.203 11:45:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.203 11:45:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61374' 00:07:13.203 11:45:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61374 00:07:13.203 11:45:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61374 00:07:13.203 [2024-11-27 11:45:39.519161] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:13.461 [2024-11-27 11:45:39.653052] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:14.841 11:45:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.p39XvUNRrz 00:07:14.841 11:45:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:14.841 11:45:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:14.841 11:45:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:14.841 11:45:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:14.841 ************************************ 00:07:14.841 END TEST raid_read_error_test 00:07:14.841 ************************************ 00:07:14.841 11:45:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 
00:07:14.841 11:45:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:14.841 11:45:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:14.841 00:07:14.841 real 0m4.241s 00:07:14.841 user 0m5.030s 00:07:14.841 sys 0m0.500s 00:07:14.841 11:45:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.841 11:45:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.841 11:45:40 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:14.841 11:45:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:14.841 11:45:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.841 11:45:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:14.841 ************************************ 00:07:14.841 START TEST raid_write_error_test 00:07:14.841 ************************************ 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:14.841 11:45:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Uh1K8Ns6Wz 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61514 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61514 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:14.841 11:45:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61514 ']' 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.841 11:45:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.842 [2024-11-27 11:45:41.017001] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:07:14.842 [2024-11-27 11:45:41.017203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61514 ] 00:07:14.842 [2024-11-27 11:45:41.192798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.100 [2024-11-27 11:45:41.314827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.359 [2024-11-27 11:45:41.518892] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.359 [2024-11-27 11:45:41.519052] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.620 11:45:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.620 11:45:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:15.620 11:45:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:07:15.620 11:45:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:15.620 11:45:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.620 11:45:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.620 BaseBdev1_malloc 00:07:15.620 11:45:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.620 11:45:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:15.620 11:45:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.620 11:45:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.620 true 00:07:15.620 11:45:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.620 11:45:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:15.620 11:45:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.620 11:45:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.620 [2024-11-27 11:45:41.929160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:15.620 [2024-11-27 11:45:41.929229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:15.620 [2024-11-27 11:45:41.929252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:15.620 [2024-11-27 11:45:41.929263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:15.620 [2024-11-27 11:45:41.931478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:15.620 [2024-11-27 11:45:41.931525] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:15.620 BaseBdev1 00:07:15.620 11:45:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.620 11:45:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:15.620 11:45:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:15.620 11:45:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.620 11:45:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.620 BaseBdev2_malloc 00:07:15.620 11:45:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.620 11:45:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:15.620 11:45:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.620 11:45:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.620 true 00:07:15.620 11:45:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.620 11:45:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:15.620 11:45:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.620 11:45:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.620 [2024-11-27 11:45:41.998991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:15.620 [2024-11-27 11:45:41.999058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:15.620 [2024-11-27 11:45:41.999082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:15.620 
[2024-11-27 11:45:41.999093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:15.620 [2024-11-27 11:45:42.001561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:15.880 [2024-11-27 11:45:42.001669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:15.880 BaseBdev2 00:07:15.880 11:45:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.880 11:45:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:15.880 11:45:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.880 11:45:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.880 [2024-11-27 11:45:42.011057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:15.880 [2024-11-27 11:45:42.013129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:15.880 [2024-11-27 11:45:42.013338] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:15.880 [2024-11-27 11:45:42.013356] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:15.880 [2024-11-27 11:45:42.013635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:15.880 [2024-11-27 11:45:42.013812] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:15.880 [2024-11-27 11:45:42.013825] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:15.880 [2024-11-27 11:45:42.014031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.880 11:45:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.880 
11:45:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:15.880 11:45:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:15.880 11:45:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:15.880 11:45:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:15.880 11:45:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.880 11:45:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.880 11:45:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.880 11:45:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.880 11:45:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.880 11:45:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.880 11:45:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.880 11:45:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:15.880 11:45:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.880 11:45:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.880 11:45:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.880 11:45:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.880 "name": "raid_bdev1", 00:07:15.880 "uuid": "90b7b8c9-b0b5-4ffa-8c37-d3a5de76b028", 00:07:15.880 "strip_size_kb": 64, 00:07:15.880 "state": "online", 00:07:15.880 "raid_level": "raid0", 00:07:15.880 "superblock": true, 
00:07:15.880 "num_base_bdevs": 2, 00:07:15.880 "num_base_bdevs_discovered": 2, 00:07:15.880 "num_base_bdevs_operational": 2, 00:07:15.880 "base_bdevs_list": [ 00:07:15.880 { 00:07:15.880 "name": "BaseBdev1", 00:07:15.880 "uuid": "b799d55c-bc30-5bb2-879a-5280fd6ef1a4", 00:07:15.880 "is_configured": true, 00:07:15.880 "data_offset": 2048, 00:07:15.880 "data_size": 63488 00:07:15.880 }, 00:07:15.880 { 00:07:15.880 "name": "BaseBdev2", 00:07:15.880 "uuid": "94b66166-5fb3-5b97-b5bc-bacb10845555", 00:07:15.880 "is_configured": true, 00:07:15.880 "data_offset": 2048, 00:07:15.880 "data_size": 63488 00:07:15.880 } 00:07:15.880 ] 00:07:15.880 }' 00:07:15.880 11:45:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.880 11:45:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.138 11:45:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:16.138 11:45:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:16.397 [2024-11-27 11:45:42.603330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:17.333 11:45:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:17.333 11:45:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.333 11:45:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.333 11:45:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.333 11:45:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:17.333 11:45:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:17.333 11:45:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:07:17.333 11:45:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:17.333 11:45:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:17.334 11:45:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:17.334 11:45:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:17.334 11:45:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.334 11:45:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:17.334 11:45:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.334 11:45:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.334 11:45:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.334 11:45:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.334 11:45:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.334 11:45:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:17.334 11:45:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.334 11:45:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.334 11:45:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.334 11:45:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.334 "name": "raid_bdev1", 00:07:17.334 "uuid": "90b7b8c9-b0b5-4ffa-8c37-d3a5de76b028", 00:07:17.334 "strip_size_kb": 64, 00:07:17.334 "state": "online", 00:07:17.334 "raid_level": "raid0", 
00:07:17.334 "superblock": true, 00:07:17.334 "num_base_bdevs": 2, 00:07:17.334 "num_base_bdevs_discovered": 2, 00:07:17.334 "num_base_bdevs_operational": 2, 00:07:17.334 "base_bdevs_list": [ 00:07:17.334 { 00:07:17.334 "name": "BaseBdev1", 00:07:17.334 "uuid": "b799d55c-bc30-5bb2-879a-5280fd6ef1a4", 00:07:17.334 "is_configured": true, 00:07:17.334 "data_offset": 2048, 00:07:17.334 "data_size": 63488 00:07:17.334 }, 00:07:17.334 { 00:07:17.334 "name": "BaseBdev2", 00:07:17.334 "uuid": "94b66166-5fb3-5b97-b5bc-bacb10845555", 00:07:17.334 "is_configured": true, 00:07:17.334 "data_offset": 2048, 00:07:17.334 "data_size": 63488 00:07:17.334 } 00:07:17.334 ] 00:07:17.334 }' 00:07:17.334 11:45:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.334 11:45:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.902 11:45:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:17.902 11:45:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.902 11:45:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.902 [2024-11-27 11:45:43.983480] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:17.902 [2024-11-27 11:45:43.983519] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:17.902 [2024-11-27 11:45:43.986510] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:17.902 [2024-11-27 11:45:43.986555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:17.902 [2024-11-27 11:45:43.986588] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:17.902 [2024-11-27 11:45:43.986601] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:17.902 { 
00:07:17.902 "results": [ 00:07:17.902 { 00:07:17.902 "job": "raid_bdev1", 00:07:17.902 "core_mask": "0x1", 00:07:17.902 "workload": "randrw", 00:07:17.902 "percentage": 50, 00:07:17.902 "status": "finished", 00:07:17.902 "queue_depth": 1, 00:07:17.902 "io_size": 131072, 00:07:17.902 "runtime": 1.381056, 00:07:17.902 "iops": 15535.93771722508, 00:07:17.902 "mibps": 1941.992214653135, 00:07:17.902 "io_failed": 1, 00:07:17.902 "io_timeout": 0, 00:07:17.902 "avg_latency_us": 88.96089259864301, 00:07:17.902 "min_latency_us": 26.1589519650655, 00:07:17.902 "max_latency_us": 1402.2986899563318 00:07:17.902 } 00:07:17.902 ], 00:07:17.902 "core_count": 1 00:07:17.902 } 00:07:17.902 11:45:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.902 11:45:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61514 00:07:17.902 11:45:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 61514 ']' 00:07:17.902 11:45:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61514 00:07:17.902 11:45:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:17.902 11:45:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.902 11:45:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61514 00:07:17.902 killing process with pid 61514 00:07:17.902 11:45:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.902 11:45:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.902 11:45:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61514' 00:07:17.902 11:45:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61514 00:07:17.902 [2024-11-27 11:45:44.032098] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:17.902 11:45:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61514 00:07:17.902 [2024-11-27 11:45:44.167627] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:19.279 11:45:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:19.279 11:45:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Uh1K8Ns6Wz 00:07:19.279 11:45:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:19.279 11:45:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:19.279 11:45:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:19.279 ************************************ 00:07:19.279 END TEST raid_write_error_test 00:07:19.279 ************************************ 00:07:19.279 11:45:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:19.279 11:45:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:19.279 11:45:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:19.279 00:07:19.279 real 0m4.448s 00:07:19.279 user 0m5.372s 00:07:19.279 sys 0m0.572s 00:07:19.279 11:45:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.279 11:45:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.279 11:45:45 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:19.279 11:45:45 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:19.279 11:45:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:19.279 11:45:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.279 11:45:45 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:07:19.279 ************************************ 00:07:19.279 START TEST raid_state_function_test 00:07:19.279 ************************************ 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:19.279 11:45:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:19.279 Process raid pid: 61656 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61656 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61656' 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61656 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61656 ']' 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.279 11:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.279 [2024-11-27 11:45:45.517873] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:07:19.279 [2024-11-27 11:45:45.518016] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.538 [2024-11-27 11:45:45.694878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.538 [2024-11-27 11:45:45.808592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.797 [2024-11-27 11:45:46.011856] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.797 [2024-11-27 11:45:46.011908] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.057 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.057 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:20.057 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:20.057 11:45:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.057 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.057 [2024-11-27 11:45:46.399987] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:20.057 [2024-11-27 11:45:46.400042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:20.057 [2024-11-27 11:45:46.400053] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:20.057 [2024-11-27 11:45:46.400065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:20.057 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.057 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:20.057 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:20.057 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:20.057 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:20.057 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.057 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.057 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.057 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.057 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.057 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.057 11:45:46 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.057 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.057 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.057 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.057 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.317 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.317 "name": "Existed_Raid", 00:07:20.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.317 "strip_size_kb": 64, 00:07:20.317 "state": "configuring", 00:07:20.317 "raid_level": "concat", 00:07:20.317 "superblock": false, 00:07:20.317 "num_base_bdevs": 2, 00:07:20.317 "num_base_bdevs_discovered": 0, 00:07:20.318 "num_base_bdevs_operational": 2, 00:07:20.318 "base_bdevs_list": [ 00:07:20.318 { 00:07:20.318 "name": "BaseBdev1", 00:07:20.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.318 "is_configured": false, 00:07:20.318 "data_offset": 0, 00:07:20.318 "data_size": 0 00:07:20.318 }, 00:07:20.318 { 00:07:20.318 "name": "BaseBdev2", 00:07:20.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.318 "is_configured": false, 00:07:20.318 "data_offset": 0, 00:07:20.318 "data_size": 0 00:07:20.318 } 00:07:20.318 ] 00:07:20.318 }' 00:07:20.318 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.318 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:20.578 [2024-11-27 11:45:46.783292] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:20.578 [2024-11-27 11:45:46.783387] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.578 [2024-11-27 11:45:46.795266] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:20.578 [2024-11-27 11:45:46.795356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:20.578 [2024-11-27 11:45:46.795382] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:20.578 [2024-11-27 11:45:46.795407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.578 [2024-11-27 11:45:46.844633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:20.578 BaseBdev1 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.578 [ 00:07:20.578 { 00:07:20.578 "name": "BaseBdev1", 00:07:20.578 "aliases": [ 00:07:20.578 "6bf068de-7ef0-42f9-acef-9366b28368c5" 00:07:20.578 ], 00:07:20.578 "product_name": "Malloc disk", 00:07:20.578 "block_size": 512, 00:07:20.578 "num_blocks": 65536, 00:07:20.578 "uuid": "6bf068de-7ef0-42f9-acef-9366b28368c5", 00:07:20.578 "assigned_rate_limits": { 00:07:20.578 "rw_ios_per_sec": 0, 00:07:20.578 "rw_mbytes_per_sec": 0, 00:07:20.578 "r_mbytes_per_sec": 0, 00:07:20.578 
"w_mbytes_per_sec": 0 00:07:20.578 }, 00:07:20.578 "claimed": true, 00:07:20.578 "claim_type": "exclusive_write", 00:07:20.578 "zoned": false, 00:07:20.578 "supported_io_types": { 00:07:20.578 "read": true, 00:07:20.578 "write": true, 00:07:20.578 "unmap": true, 00:07:20.578 "flush": true, 00:07:20.578 "reset": true, 00:07:20.578 "nvme_admin": false, 00:07:20.578 "nvme_io": false, 00:07:20.578 "nvme_io_md": false, 00:07:20.578 "write_zeroes": true, 00:07:20.578 "zcopy": true, 00:07:20.578 "get_zone_info": false, 00:07:20.578 "zone_management": false, 00:07:20.578 "zone_append": false, 00:07:20.578 "compare": false, 00:07:20.578 "compare_and_write": false, 00:07:20.578 "abort": true, 00:07:20.578 "seek_hole": false, 00:07:20.578 "seek_data": false, 00:07:20.578 "copy": true, 00:07:20.578 "nvme_iov_md": false 00:07:20.578 }, 00:07:20.578 "memory_domains": [ 00:07:20.578 { 00:07:20.578 "dma_device_id": "system", 00:07:20.578 "dma_device_type": 1 00:07:20.578 }, 00:07:20.578 { 00:07:20.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.578 "dma_device_type": 2 00:07:20.578 } 00:07:20.578 ], 00:07:20.578 "driver_specific": {} 00:07:20.578 } 00:07:20.578 ] 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.578 "name": "Existed_Raid", 00:07:20.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.578 "strip_size_kb": 64, 00:07:20.578 "state": "configuring", 00:07:20.578 "raid_level": "concat", 00:07:20.578 "superblock": false, 00:07:20.578 "num_base_bdevs": 2, 00:07:20.578 "num_base_bdevs_discovered": 1, 00:07:20.578 "num_base_bdevs_operational": 2, 00:07:20.578 "base_bdevs_list": [ 00:07:20.578 { 00:07:20.578 "name": "BaseBdev1", 00:07:20.578 "uuid": "6bf068de-7ef0-42f9-acef-9366b28368c5", 00:07:20.578 "is_configured": true, 00:07:20.578 "data_offset": 0, 00:07:20.578 "data_size": 65536 00:07:20.578 }, 00:07:20.578 { 00:07:20.578 "name": "BaseBdev2", 00:07:20.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.578 "is_configured": 
false, 00:07:20.578 "data_offset": 0, 00:07:20.578 "data_size": 0 00:07:20.578 } 00:07:20.578 ] 00:07:20.578 }' 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.578 11:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.149 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:21.149 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.149 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.149 [2024-11-27 11:45:47.319902] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:21.149 [2024-11-27 11:45:47.319957] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:21.149 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.149 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:21.149 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.149 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.149 [2024-11-27 11:45:47.331891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:21.149 [2024-11-27 11:45:47.333740] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:21.149 [2024-11-27 11:45:47.333789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:21.149 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.149 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 
-- # (( i = 1 )) 00:07:21.149 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:21.149 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:21.149 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:21.149 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:21.149 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:21.149 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.149 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.149 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.149 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.149 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.149 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.149 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.149 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.149 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.149 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.149 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.149 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.149 "name": 
"Existed_Raid", 00:07:21.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.149 "strip_size_kb": 64, 00:07:21.149 "state": "configuring", 00:07:21.149 "raid_level": "concat", 00:07:21.149 "superblock": false, 00:07:21.149 "num_base_bdevs": 2, 00:07:21.149 "num_base_bdevs_discovered": 1, 00:07:21.149 "num_base_bdevs_operational": 2, 00:07:21.149 "base_bdevs_list": [ 00:07:21.149 { 00:07:21.149 "name": "BaseBdev1", 00:07:21.149 "uuid": "6bf068de-7ef0-42f9-acef-9366b28368c5", 00:07:21.149 "is_configured": true, 00:07:21.149 "data_offset": 0, 00:07:21.149 "data_size": 65536 00:07:21.149 }, 00:07:21.149 { 00:07:21.149 "name": "BaseBdev2", 00:07:21.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.149 "is_configured": false, 00:07:21.149 "data_offset": 0, 00:07:21.149 "data_size": 0 00:07:21.149 } 00:07:21.149 ] 00:07:21.149 }' 00:07:21.149 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.149 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.409 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:21.409 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.409 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.409 [2024-11-27 11:45:47.780065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:21.409 [2024-11-27 11:45:47.780200] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:21.409 [2024-11-27 11:45:47.780230] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:21.409 [2024-11-27 11:45:47.780575] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:21.409 [2024-11-27 11:45:47.780829] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: 
raid bdev generic 0x617000007e80 00:07:21.409 [2024-11-27 11:45:47.780894] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:21.409 [2024-11-27 11:45:47.781212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:21.409 BaseBdev2 00:07:21.409 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.409 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:21.409 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:21.409 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:21.409 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:21.409 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:21.409 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:21.409 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:21.409 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.409 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.668 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.668 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:21.668 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.668 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.668 [ 00:07:21.668 { 00:07:21.668 "name": "BaseBdev2", 00:07:21.668 "aliases": [ 
00:07:21.668 "e15e35d9-3803-4bf1-949f-7b1aeb31616e" 00:07:21.668 ], 00:07:21.668 "product_name": "Malloc disk", 00:07:21.668 "block_size": 512, 00:07:21.668 "num_blocks": 65536, 00:07:21.668 "uuid": "e15e35d9-3803-4bf1-949f-7b1aeb31616e", 00:07:21.668 "assigned_rate_limits": { 00:07:21.668 "rw_ios_per_sec": 0, 00:07:21.668 "rw_mbytes_per_sec": 0, 00:07:21.668 "r_mbytes_per_sec": 0, 00:07:21.668 "w_mbytes_per_sec": 0 00:07:21.668 }, 00:07:21.668 "claimed": true, 00:07:21.668 "claim_type": "exclusive_write", 00:07:21.668 "zoned": false, 00:07:21.668 "supported_io_types": { 00:07:21.668 "read": true, 00:07:21.668 "write": true, 00:07:21.668 "unmap": true, 00:07:21.668 "flush": true, 00:07:21.668 "reset": true, 00:07:21.668 "nvme_admin": false, 00:07:21.668 "nvme_io": false, 00:07:21.668 "nvme_io_md": false, 00:07:21.668 "write_zeroes": true, 00:07:21.668 "zcopy": true, 00:07:21.668 "get_zone_info": false, 00:07:21.668 "zone_management": false, 00:07:21.668 "zone_append": false, 00:07:21.668 "compare": false, 00:07:21.668 "compare_and_write": false, 00:07:21.668 "abort": true, 00:07:21.668 "seek_hole": false, 00:07:21.668 "seek_data": false, 00:07:21.668 "copy": true, 00:07:21.668 "nvme_iov_md": false 00:07:21.668 }, 00:07:21.668 "memory_domains": [ 00:07:21.668 { 00:07:21.668 "dma_device_id": "system", 00:07:21.668 "dma_device_type": 1 00:07:21.668 }, 00:07:21.668 { 00:07:21.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.668 "dma_device_type": 2 00:07:21.668 } 00:07:21.668 ], 00:07:21.668 "driver_specific": {} 00:07:21.668 } 00:07:21.668 ] 00:07:21.668 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.668 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:21.668 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:21.668 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:07:21.668 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:21.668 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:21.668 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:21.668 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:21.668 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.668 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.668 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.668 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.668 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.668 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.668 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.668 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.668 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.668 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.668 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.668 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.668 "name": "Existed_Raid", 00:07:21.668 "uuid": "eaf62dbf-3d3b-4ab2-b8ee-bc50f798977d", 00:07:21.668 "strip_size_kb": 64, 00:07:21.668 "state": "online", 
00:07:21.668 "raid_level": "concat", 00:07:21.668 "superblock": false, 00:07:21.668 "num_base_bdevs": 2, 00:07:21.668 "num_base_bdevs_discovered": 2, 00:07:21.668 "num_base_bdevs_operational": 2, 00:07:21.668 "base_bdevs_list": [ 00:07:21.668 { 00:07:21.668 "name": "BaseBdev1", 00:07:21.668 "uuid": "6bf068de-7ef0-42f9-acef-9366b28368c5", 00:07:21.668 "is_configured": true, 00:07:21.668 "data_offset": 0, 00:07:21.669 "data_size": 65536 00:07:21.669 }, 00:07:21.669 { 00:07:21.669 "name": "BaseBdev2", 00:07:21.669 "uuid": "e15e35d9-3803-4bf1-949f-7b1aeb31616e", 00:07:21.669 "is_configured": true, 00:07:21.669 "data_offset": 0, 00:07:21.669 "data_size": 65536 00:07:21.669 } 00:07:21.669 ] 00:07:21.669 }' 00:07:21.669 11:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.669 11:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.928 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:21.928 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:21.928 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:21.928 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:21.928 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:21.928 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:21.928 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:21.928 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:21.928 11:45:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.928 11:45:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:21.928 [2024-11-27 11:45:48.271567] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:21.928 11:45:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.928 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:21.928 "name": "Existed_Raid", 00:07:21.928 "aliases": [ 00:07:21.928 "eaf62dbf-3d3b-4ab2-b8ee-bc50f798977d" 00:07:21.928 ], 00:07:21.928 "product_name": "Raid Volume", 00:07:21.928 "block_size": 512, 00:07:21.928 "num_blocks": 131072, 00:07:21.928 "uuid": "eaf62dbf-3d3b-4ab2-b8ee-bc50f798977d", 00:07:21.928 "assigned_rate_limits": { 00:07:21.928 "rw_ios_per_sec": 0, 00:07:21.928 "rw_mbytes_per_sec": 0, 00:07:21.928 "r_mbytes_per_sec": 0, 00:07:21.928 "w_mbytes_per_sec": 0 00:07:21.928 }, 00:07:21.928 "claimed": false, 00:07:21.928 "zoned": false, 00:07:21.928 "supported_io_types": { 00:07:21.928 "read": true, 00:07:21.928 "write": true, 00:07:21.928 "unmap": true, 00:07:21.928 "flush": true, 00:07:21.928 "reset": true, 00:07:21.928 "nvme_admin": false, 00:07:21.928 "nvme_io": false, 00:07:21.928 "nvme_io_md": false, 00:07:21.928 "write_zeroes": true, 00:07:21.928 "zcopy": false, 00:07:21.928 "get_zone_info": false, 00:07:21.928 "zone_management": false, 00:07:21.928 "zone_append": false, 00:07:21.928 "compare": false, 00:07:21.928 "compare_and_write": false, 00:07:21.928 "abort": false, 00:07:21.928 "seek_hole": false, 00:07:21.928 "seek_data": false, 00:07:21.928 "copy": false, 00:07:21.928 "nvme_iov_md": false 00:07:21.928 }, 00:07:21.928 "memory_domains": [ 00:07:21.928 { 00:07:21.928 "dma_device_id": "system", 00:07:21.929 "dma_device_type": 1 00:07:21.929 }, 00:07:21.929 { 00:07:21.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.929 "dma_device_type": 2 00:07:21.929 }, 00:07:21.929 { 00:07:21.929 "dma_device_id": "system", 00:07:21.929 "dma_device_type": 1 00:07:21.929 }, 
00:07:21.929 { 00:07:21.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.929 "dma_device_type": 2 00:07:21.929 } 00:07:21.929 ], 00:07:21.929 "driver_specific": { 00:07:21.929 "raid": { 00:07:21.929 "uuid": "eaf62dbf-3d3b-4ab2-b8ee-bc50f798977d", 00:07:21.929 "strip_size_kb": 64, 00:07:21.929 "state": "online", 00:07:21.929 "raid_level": "concat", 00:07:21.929 "superblock": false, 00:07:21.929 "num_base_bdevs": 2, 00:07:21.929 "num_base_bdevs_discovered": 2, 00:07:21.929 "num_base_bdevs_operational": 2, 00:07:21.929 "base_bdevs_list": [ 00:07:21.929 { 00:07:21.929 "name": "BaseBdev1", 00:07:21.929 "uuid": "6bf068de-7ef0-42f9-acef-9366b28368c5", 00:07:21.929 "is_configured": true, 00:07:21.929 "data_offset": 0, 00:07:21.929 "data_size": 65536 00:07:21.929 }, 00:07:21.929 { 00:07:21.929 "name": "BaseBdev2", 00:07:21.929 "uuid": "e15e35d9-3803-4bf1-949f-7b1aeb31616e", 00:07:21.929 "is_configured": true, 00:07:21.929 "data_offset": 0, 00:07:21.929 "data_size": 65536 00:07:21.929 } 00:07:21.929 ] 00:07:21.929 } 00:07:21.929 } 00:07:21.929 }' 00:07:22.189 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:22.189 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:22.189 BaseBdev2' 00:07:22.189 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.189 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:22.189 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:22.189 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:22.189 11:45:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.189 
11:45:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.189 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.189 11:45:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.189 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:22.189 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:22.189 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:22.189 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.189 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:22.189 11:45:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.189 11:45:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.189 11:45:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.189 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:22.189 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:22.189 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:22.189 11:45:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.189 11:45:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.189 [2024-11-27 11:45:48.475003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:22.189 [2024-11-27 
11:45:48.475041] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:22.189 [2024-11-27 11:45:48.475097] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:22.448 11:45:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.448 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:22.448 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:22.448 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:22.448 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:22.448 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:22.448 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:22.448 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.448 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:22.448 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:22.448 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.448 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:22.448 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.448 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.448 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.448 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:07:22.448 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.448 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.448 11:45:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.448 11:45:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.448 11:45:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.448 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.448 "name": "Existed_Raid", 00:07:22.448 "uuid": "eaf62dbf-3d3b-4ab2-b8ee-bc50f798977d", 00:07:22.448 "strip_size_kb": 64, 00:07:22.448 "state": "offline", 00:07:22.448 "raid_level": "concat", 00:07:22.448 "superblock": false, 00:07:22.448 "num_base_bdevs": 2, 00:07:22.448 "num_base_bdevs_discovered": 1, 00:07:22.448 "num_base_bdevs_operational": 1, 00:07:22.448 "base_bdevs_list": [ 00:07:22.448 { 00:07:22.448 "name": null, 00:07:22.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.448 "is_configured": false, 00:07:22.448 "data_offset": 0, 00:07:22.448 "data_size": 65536 00:07:22.448 }, 00:07:22.448 { 00:07:22.448 "name": "BaseBdev2", 00:07:22.448 "uuid": "e15e35d9-3803-4bf1-949f-7b1aeb31616e", 00:07:22.448 "is_configured": true, 00:07:22.448 "data_offset": 0, 00:07:22.448 "data_size": 65536 00:07:22.448 } 00:07:22.448 ] 00:07:22.448 }' 00:07:22.448 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.448 11:45:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.708 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:22.708 11:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:22.708 11:45:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:22.708 11:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.708 11:45:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.708 11:45:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.708 11:45:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.708 11:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:22.708 11:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:22.708 11:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:22.708 11:45:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.708 11:45:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.708 [2024-11-27 11:45:49.058652] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:22.708 [2024-11-27 11:45:49.058784] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:22.978 11:45:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.978 11:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:22.978 11:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:22.978 11:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.978 11:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:22.978 11:45:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.978 11:45:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.978 11:45:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.978 11:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:22.978 11:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:22.979 11:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:22.979 11:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61656 00:07:22.979 11:45:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61656 ']' 00:07:22.979 11:45:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 61656 00:07:22.979 11:45:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:22.979 11:45:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.979 11:45:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61656 00:07:22.979 killing process with pid 61656 00:07:22.979 11:45:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.979 11:45:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.979 11:45:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61656' 00:07:22.979 11:45:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61656 00:07:22.979 [2024-11-27 11:45:49.252921] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:22.979 11:45:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61656 00:07:22.979 [2024-11-27 
11:45:49.269234] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:24.365 11:45:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:24.365 00:07:24.365 real 0m4.998s 00:07:24.365 user 0m7.195s 00:07:24.365 sys 0m0.778s 00:07:24.365 11:45:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.365 ************************************ 00:07:24.365 END TEST raid_state_function_test 00:07:24.365 ************************************ 00:07:24.365 11:45:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.365 11:45:50 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:24.365 11:45:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:24.365 11:45:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.365 11:45:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:24.365 ************************************ 00:07:24.365 START TEST raid_state_function_test_sb 00:07:24.365 ************************************ 00:07:24.365 11:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:24.366 11:45:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61905 
00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61905' 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:24.366 Process raid pid: 61905 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61905 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61905 ']' 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.366 11:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.366 [2024-11-27 11:45:50.593113] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:07:24.366 [2024-11-27 11:45:50.593317] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.625 [2024-11-27 11:45:50.769665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.625 [2024-11-27 11:45:50.892585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.884 [2024-11-27 11:45:51.106894] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.884 [2024-11-27 11:45:51.106943] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.144 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.144 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:25.144 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:25.144 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.144 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.144 [2024-11-27 11:45:51.438365] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:25.144 [2024-11-27 11:45:51.438419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:25.144 [2024-11-27 11:45:51.438430] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:25.144 [2024-11-27 11:45:51.438439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:25.144 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:25.144 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:25.144 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.144 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.144 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:25.144 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.144 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:25.144 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.144 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.144 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.144 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.144 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.144 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.144 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.144 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.144 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.144 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.144 "name": "Existed_Raid", 00:07:25.144 "uuid": "349bc781-731f-439d-a377-78f84b290333", 00:07:25.144 
"strip_size_kb": 64, 00:07:25.144 "state": "configuring", 00:07:25.144 "raid_level": "concat", 00:07:25.144 "superblock": true, 00:07:25.144 "num_base_bdevs": 2, 00:07:25.144 "num_base_bdevs_discovered": 0, 00:07:25.144 "num_base_bdevs_operational": 2, 00:07:25.144 "base_bdevs_list": [ 00:07:25.144 { 00:07:25.144 "name": "BaseBdev1", 00:07:25.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.144 "is_configured": false, 00:07:25.144 "data_offset": 0, 00:07:25.144 "data_size": 0 00:07:25.144 }, 00:07:25.144 { 00:07:25.144 "name": "BaseBdev2", 00:07:25.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.144 "is_configured": false, 00:07:25.144 "data_offset": 0, 00:07:25.144 "data_size": 0 00:07:25.144 } 00:07:25.144 ] 00:07:25.144 }' 00:07:25.144 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.144 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.711 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:25.711 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.711 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.711 [2024-11-27 11:45:51.853583] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:25.711 [2024-11-27 11:45:51.853668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:25.711 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.711 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:25.711 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:25.711 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.712 [2024-11-27 11:45:51.865579] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:25.712 [2024-11-27 11:45:51.865665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:25.712 [2024-11-27 11:45:51.865707] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:25.712 [2024-11-27 11:45:51.865737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.712 [2024-11-27 11:45:51.915379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:25.712 BaseBdev1 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.712 [ 00:07:25.712 { 00:07:25.712 "name": "BaseBdev1", 00:07:25.712 "aliases": [ 00:07:25.712 "9c18315c-da63-43a5-a21a-06890a647833" 00:07:25.712 ], 00:07:25.712 "product_name": "Malloc disk", 00:07:25.712 "block_size": 512, 00:07:25.712 "num_blocks": 65536, 00:07:25.712 "uuid": "9c18315c-da63-43a5-a21a-06890a647833", 00:07:25.712 "assigned_rate_limits": { 00:07:25.712 "rw_ios_per_sec": 0, 00:07:25.712 "rw_mbytes_per_sec": 0, 00:07:25.712 "r_mbytes_per_sec": 0, 00:07:25.712 "w_mbytes_per_sec": 0 00:07:25.712 }, 00:07:25.712 "claimed": true, 00:07:25.712 "claim_type": "exclusive_write", 00:07:25.712 "zoned": false, 00:07:25.712 "supported_io_types": { 00:07:25.712 "read": true, 00:07:25.712 "write": true, 00:07:25.712 "unmap": true, 00:07:25.712 "flush": true, 00:07:25.712 "reset": true, 00:07:25.712 "nvme_admin": false, 00:07:25.712 "nvme_io": false, 00:07:25.712 "nvme_io_md": false, 00:07:25.712 "write_zeroes": true, 00:07:25.712 "zcopy": true, 00:07:25.712 "get_zone_info": false, 00:07:25.712 "zone_management": false, 00:07:25.712 "zone_append": false, 00:07:25.712 "compare": false, 00:07:25.712 
"compare_and_write": false, 00:07:25.712 "abort": true, 00:07:25.712 "seek_hole": false, 00:07:25.712 "seek_data": false, 00:07:25.712 "copy": true, 00:07:25.712 "nvme_iov_md": false 00:07:25.712 }, 00:07:25.712 "memory_domains": [ 00:07:25.712 { 00:07:25.712 "dma_device_id": "system", 00:07:25.712 "dma_device_type": 1 00:07:25.712 }, 00:07:25.712 { 00:07:25.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.712 "dma_device_type": 2 00:07:25.712 } 00:07:25.712 ], 00:07:25.712 "driver_specific": {} 00:07:25.712 } 00:07:25.712 ] 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.712 11:45:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.712 11:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.712 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.712 "name": "Existed_Raid", 00:07:25.712 "uuid": "1130f150-0eb7-4a95-8f8f-465272d200c4", 00:07:25.712 "strip_size_kb": 64, 00:07:25.712 "state": "configuring", 00:07:25.712 "raid_level": "concat", 00:07:25.712 "superblock": true, 00:07:25.712 "num_base_bdevs": 2, 00:07:25.712 "num_base_bdevs_discovered": 1, 00:07:25.712 "num_base_bdevs_operational": 2, 00:07:25.712 "base_bdevs_list": [ 00:07:25.712 { 00:07:25.712 "name": "BaseBdev1", 00:07:25.712 "uuid": "9c18315c-da63-43a5-a21a-06890a647833", 00:07:25.712 "is_configured": true, 00:07:25.712 "data_offset": 2048, 00:07:25.712 "data_size": 63488 00:07:25.712 }, 00:07:25.712 { 00:07:25.712 "name": "BaseBdev2", 00:07:25.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.712 "is_configured": false, 00:07:25.712 "data_offset": 0, 00:07:25.712 "data_size": 0 00:07:25.712 } 00:07:25.712 ] 00:07:25.712 }' 00:07:25.712 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.712 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.281 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:26.281 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:26.281 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.281 [2024-11-27 11:45:52.394648] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:26.281 [2024-11-27 11:45:52.394769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:26.281 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.281 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:26.281 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.281 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.281 [2024-11-27 11:45:52.402702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:26.281 [2024-11-27 11:45:52.404798] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:26.281 [2024-11-27 11:45:52.404854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:26.281 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.281 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:26.281 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:26.281 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:26.281 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.281 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:07:26.281 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:26.281 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.281 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.281 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.281 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.281 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.281 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.281 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.281 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.281 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.281 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.281 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.281 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.281 "name": "Existed_Raid", 00:07:26.281 "uuid": "58326e42-fb61-487c-af19-dfc39bb42c7f", 00:07:26.281 "strip_size_kb": 64, 00:07:26.281 "state": "configuring", 00:07:26.281 "raid_level": "concat", 00:07:26.281 "superblock": true, 00:07:26.281 "num_base_bdevs": 2, 00:07:26.281 "num_base_bdevs_discovered": 1, 00:07:26.281 "num_base_bdevs_operational": 2, 00:07:26.281 "base_bdevs_list": [ 00:07:26.281 { 00:07:26.281 "name": "BaseBdev1", 00:07:26.281 "uuid": 
"9c18315c-da63-43a5-a21a-06890a647833", 00:07:26.281 "is_configured": true, 00:07:26.281 "data_offset": 2048, 00:07:26.281 "data_size": 63488 00:07:26.281 }, 00:07:26.281 { 00:07:26.281 "name": "BaseBdev2", 00:07:26.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.281 "is_configured": false, 00:07:26.281 "data_offset": 0, 00:07:26.281 "data_size": 0 00:07:26.281 } 00:07:26.281 ] 00:07:26.281 }' 00:07:26.281 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.281 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.574 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:26.574 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.574 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.574 [2024-11-27 11:45:52.900273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:26.574 [2024-11-27 11:45:52.900517] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:26.574 [2024-11-27 11:45:52.900533] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:26.574 [2024-11-27 11:45:52.900791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:26.574 BaseBdev2 00:07:26.574 [2024-11-27 11:45:52.900981] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:26.574 [2024-11-27 11:45:52.901002] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:26.574 [2024-11-27 11:45:52.901155] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.574 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:07:26.574 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:26.574 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:26.574 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:26.575 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:26.575 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:26.575 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:26.575 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:26.575 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.575 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.575 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.575 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:26.575 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.575 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.575 [ 00:07:26.575 { 00:07:26.575 "name": "BaseBdev2", 00:07:26.575 "aliases": [ 00:07:26.575 "e42d994f-be5d-4dfe-bea8-9c4c2bbedd2a" 00:07:26.575 ], 00:07:26.575 "product_name": "Malloc disk", 00:07:26.575 "block_size": 512, 00:07:26.575 "num_blocks": 65536, 00:07:26.575 "uuid": "e42d994f-be5d-4dfe-bea8-9c4c2bbedd2a", 00:07:26.575 "assigned_rate_limits": { 00:07:26.575 "rw_ios_per_sec": 0, 00:07:26.575 "rw_mbytes_per_sec": 0, 00:07:26.575 "r_mbytes_per_sec": 0, 
00:07:26.575 "w_mbytes_per_sec": 0 00:07:26.575 }, 00:07:26.575 "claimed": true, 00:07:26.575 "claim_type": "exclusive_write", 00:07:26.575 "zoned": false, 00:07:26.575 "supported_io_types": { 00:07:26.575 "read": true, 00:07:26.575 "write": true, 00:07:26.575 "unmap": true, 00:07:26.575 "flush": true, 00:07:26.575 "reset": true, 00:07:26.575 "nvme_admin": false, 00:07:26.575 "nvme_io": false, 00:07:26.575 "nvme_io_md": false, 00:07:26.575 "write_zeroes": true, 00:07:26.575 "zcopy": true, 00:07:26.575 "get_zone_info": false, 00:07:26.575 "zone_management": false, 00:07:26.575 "zone_append": false, 00:07:26.575 "compare": false, 00:07:26.575 "compare_and_write": false, 00:07:26.575 "abort": true, 00:07:26.575 "seek_hole": false, 00:07:26.575 "seek_data": false, 00:07:26.575 "copy": true, 00:07:26.575 "nvme_iov_md": false 00:07:26.575 }, 00:07:26.575 "memory_domains": [ 00:07:26.575 { 00:07:26.575 "dma_device_id": "system", 00:07:26.575 "dma_device_type": 1 00:07:26.575 }, 00:07:26.575 { 00:07:26.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.575 "dma_device_type": 2 00:07:26.575 } 00:07:26.575 ], 00:07:26.575 "driver_specific": {} 00:07:26.575 } 00:07:26.575 ] 00:07:26.575 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.575 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:26.575 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:26.575 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:26.575 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:26.575 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.575 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:07:26.575 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:26.575 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.575 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.575 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.575 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.575 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.575 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.859 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.859 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.859 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.859 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.859 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.859 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.859 "name": "Existed_Raid", 00:07:26.859 "uuid": "58326e42-fb61-487c-af19-dfc39bb42c7f", 00:07:26.859 "strip_size_kb": 64, 00:07:26.859 "state": "online", 00:07:26.859 "raid_level": "concat", 00:07:26.859 "superblock": true, 00:07:26.859 "num_base_bdevs": 2, 00:07:26.859 "num_base_bdevs_discovered": 2, 00:07:26.859 "num_base_bdevs_operational": 2, 00:07:26.859 "base_bdevs_list": [ 00:07:26.859 { 00:07:26.859 "name": "BaseBdev1", 00:07:26.859 "uuid": 
"9c18315c-da63-43a5-a21a-06890a647833", 00:07:26.859 "is_configured": true, 00:07:26.859 "data_offset": 2048, 00:07:26.859 "data_size": 63488 00:07:26.859 }, 00:07:26.859 { 00:07:26.859 "name": "BaseBdev2", 00:07:26.859 "uuid": "e42d994f-be5d-4dfe-bea8-9c4c2bbedd2a", 00:07:26.859 "is_configured": true, 00:07:26.859 "data_offset": 2048, 00:07:26.859 "data_size": 63488 00:07:26.859 } 00:07:26.859 ] 00:07:26.859 }' 00:07:26.859 11:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.859 11:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.118 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:27.118 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:27.118 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:27.118 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:27.118 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:27.118 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:27.118 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:27.118 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:27.118 11:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.118 11:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.118 [2024-11-27 11:45:53.400056] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:27.118 11:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:07:27.118 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:27.118 "name": "Existed_Raid", 00:07:27.118 "aliases": [ 00:07:27.118 "58326e42-fb61-487c-af19-dfc39bb42c7f" 00:07:27.118 ], 00:07:27.118 "product_name": "Raid Volume", 00:07:27.118 "block_size": 512, 00:07:27.118 "num_blocks": 126976, 00:07:27.118 "uuid": "58326e42-fb61-487c-af19-dfc39bb42c7f", 00:07:27.118 "assigned_rate_limits": { 00:07:27.118 "rw_ios_per_sec": 0, 00:07:27.118 "rw_mbytes_per_sec": 0, 00:07:27.118 "r_mbytes_per_sec": 0, 00:07:27.118 "w_mbytes_per_sec": 0 00:07:27.118 }, 00:07:27.118 "claimed": false, 00:07:27.118 "zoned": false, 00:07:27.118 "supported_io_types": { 00:07:27.118 "read": true, 00:07:27.118 "write": true, 00:07:27.118 "unmap": true, 00:07:27.118 "flush": true, 00:07:27.118 "reset": true, 00:07:27.118 "nvme_admin": false, 00:07:27.118 "nvme_io": false, 00:07:27.118 "nvme_io_md": false, 00:07:27.118 "write_zeroes": true, 00:07:27.118 "zcopy": false, 00:07:27.118 "get_zone_info": false, 00:07:27.118 "zone_management": false, 00:07:27.118 "zone_append": false, 00:07:27.118 "compare": false, 00:07:27.118 "compare_and_write": false, 00:07:27.118 "abort": false, 00:07:27.118 "seek_hole": false, 00:07:27.118 "seek_data": false, 00:07:27.118 "copy": false, 00:07:27.118 "nvme_iov_md": false 00:07:27.118 }, 00:07:27.118 "memory_domains": [ 00:07:27.118 { 00:07:27.118 "dma_device_id": "system", 00:07:27.118 "dma_device_type": 1 00:07:27.118 }, 00:07:27.118 { 00:07:27.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.118 "dma_device_type": 2 00:07:27.118 }, 00:07:27.118 { 00:07:27.118 "dma_device_id": "system", 00:07:27.118 "dma_device_type": 1 00:07:27.118 }, 00:07:27.118 { 00:07:27.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.118 "dma_device_type": 2 00:07:27.118 } 00:07:27.118 ], 00:07:27.118 "driver_specific": { 00:07:27.118 "raid": { 00:07:27.118 "uuid": "58326e42-fb61-487c-af19-dfc39bb42c7f", 00:07:27.118 
"strip_size_kb": 64, 00:07:27.118 "state": "online", 00:07:27.118 "raid_level": "concat", 00:07:27.118 "superblock": true, 00:07:27.118 "num_base_bdevs": 2, 00:07:27.118 "num_base_bdevs_discovered": 2, 00:07:27.118 "num_base_bdevs_operational": 2, 00:07:27.118 "base_bdevs_list": [ 00:07:27.118 { 00:07:27.118 "name": "BaseBdev1", 00:07:27.118 "uuid": "9c18315c-da63-43a5-a21a-06890a647833", 00:07:27.118 "is_configured": true, 00:07:27.118 "data_offset": 2048, 00:07:27.118 "data_size": 63488 00:07:27.118 }, 00:07:27.118 { 00:07:27.118 "name": "BaseBdev2", 00:07:27.118 "uuid": "e42d994f-be5d-4dfe-bea8-9c4c2bbedd2a", 00:07:27.118 "is_configured": true, 00:07:27.118 "data_offset": 2048, 00:07:27.118 "data_size": 63488 00:07:27.118 } 00:07:27.118 ] 00:07:27.118 } 00:07:27.118 } 00:07:27.118 }' 00:07:27.118 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:27.118 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:27.118 BaseBdev2' 00:07:27.118 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.378 [2024-11-27 11:45:53.635781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:27.378 [2024-11-27 11:45:53.635826] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:27.378 [2024-11-27 11:45:53.635898] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.378 11:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.638 11:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.638 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.638 "name": "Existed_Raid", 00:07:27.638 "uuid": "58326e42-fb61-487c-af19-dfc39bb42c7f", 00:07:27.638 "strip_size_kb": 64, 00:07:27.638 "state": "offline", 00:07:27.638 "raid_level": "concat", 00:07:27.638 "superblock": true, 00:07:27.638 "num_base_bdevs": 2, 00:07:27.638 "num_base_bdevs_discovered": 1, 00:07:27.638 "num_base_bdevs_operational": 1, 00:07:27.638 "base_bdevs_list": [ 00:07:27.638 { 00:07:27.638 "name": null, 00:07:27.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.638 "is_configured": false, 00:07:27.638 "data_offset": 0, 00:07:27.638 "data_size": 63488 00:07:27.638 }, 00:07:27.638 { 00:07:27.638 "name": "BaseBdev2", 00:07:27.638 "uuid": "e42d994f-be5d-4dfe-bea8-9c4c2bbedd2a", 00:07:27.638 "is_configured": true, 00:07:27.638 "data_offset": 2048, 00:07:27.638 "data_size": 63488 00:07:27.638 } 00:07:27.638 ] 00:07:27.638 }' 00:07:27.638 11:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.638 11:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.897 11:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:27.897 11:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:27.897 11:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:27.897 
11:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.897 11:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.897 11:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.897 11:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.897 11:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:27.897 11:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:27.897 11:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:27.897 11:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.897 11:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.897 [2024-11-27 11:45:54.246935] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:27.897 [2024-11-27 11:45:54.246989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:28.156 11:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.156 11:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:28.156 11:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:28.156 11:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.156 11:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:28.156 11:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.156 11:45:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.156 11:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.156 11:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:28.156 11:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:28.156 11:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:28.156 11:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61905 00:07:28.156 11:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61905 ']' 00:07:28.157 11:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61905 00:07:28.157 11:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:28.157 11:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.157 11:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61905 00:07:28.157 killing process with pid 61905 00:07:28.157 11:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.157 11:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.157 11:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61905' 00:07:28.157 11:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61905 00:07:28.157 [2024-11-27 11:45:54.434582] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:28.157 11:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61905 00:07:28.157 [2024-11-27 11:45:54.451429] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:29.537 11:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:29.537 ************************************ 00:07:29.537 END TEST raid_state_function_test_sb 00:07:29.537 ************************************ 00:07:29.537 00:07:29.537 real 0m5.059s 00:07:29.537 user 0m7.340s 00:07:29.537 sys 0m0.791s 00:07:29.537 11:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.537 11:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.537 11:45:55 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:29.537 11:45:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:29.537 11:45:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.537 11:45:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:29.537 ************************************ 00:07:29.537 START TEST raid_superblock_test 00:07:29.537 ************************************ 00:07:29.537 11:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:07:29.537 11:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:29.537 11:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:29.537 11:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:29.537 11:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:29.537 11:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:29.537 11:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:29.537 11:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:29.537 
11:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:29.537 11:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:29.537 11:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:29.537 11:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:29.537 11:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:29.537 11:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:29.537 11:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:29.537 11:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:29.537 11:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:29.537 11:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62157 00:07:29.537 11:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:29.537 11:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62157 00:07:29.537 11:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62157 ']' 00:07:29.537 11:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.537 11:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.537 11:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:29.537 11:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.537 11:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.537 [2024-11-27 11:45:55.712033] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:07:29.537 [2024-11-27 11:45:55.712236] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62157 ] 00:07:29.537 [2024-11-27 11:45:55.867923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.797 [2024-11-27 11:45:55.983537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.057 [2024-11-27 11:45:56.180571] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:30.057 [2024-11-27 11:45:56.180605] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:30.316 11:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.316 11:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:30.316 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:30.316 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:30.316 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:30.316 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:30.316 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:30.316 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:30.316 11:45:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:30.316 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:30.316 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:30.316 11:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.316 11:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.316 malloc1 00:07:30.316 11:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.316 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:30.316 11:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.316 11:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.316 [2024-11-27 11:45:56.588577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:30.316 [2024-11-27 11:45:56.588944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.316 [2024-11-27 11:45:56.588986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:30.316 [2024-11-27 11:45:56.588999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:30.316 [2024-11-27 11:45:56.591318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.316 [2024-11-27 11:45:56.591350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:30.316 pt1 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:30.317 11:45:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.317 malloc2 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.317 [2024-11-27 11:45:56.647613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:30.317 [2024-11-27 11:45:56.647678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.317 [2024-11-27 11:45:56.647707] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:30.317 
[2024-11-27 11:45:56.647717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:30.317 [2024-11-27 11:45:56.650049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.317 [2024-11-27 11:45:56.650083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:30.317 pt2 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.317 [2024-11-27 11:45:56.659641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:30.317 [2024-11-27 11:45:56.661535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:30.317 [2024-11-27 11:45:56.661698] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:30.317 [2024-11-27 11:45:56.661709] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:30.317 [2024-11-27 11:45:56.661989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:30.317 [2024-11-27 11:45:56.662165] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:30.317 [2024-11-27 11:45:56.662184] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:30.317 [2024-11-27 11:45:56.662336] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.317 11:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.577 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.577 "name": "raid_bdev1", 00:07:30.577 "uuid": 
"a1821c7b-6bb4-4411-a0d9-eadc0b99a0c7", 00:07:30.577 "strip_size_kb": 64, 00:07:30.577 "state": "online", 00:07:30.577 "raid_level": "concat", 00:07:30.577 "superblock": true, 00:07:30.577 "num_base_bdevs": 2, 00:07:30.577 "num_base_bdevs_discovered": 2, 00:07:30.577 "num_base_bdevs_operational": 2, 00:07:30.577 "base_bdevs_list": [ 00:07:30.577 { 00:07:30.577 "name": "pt1", 00:07:30.577 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:30.577 "is_configured": true, 00:07:30.577 "data_offset": 2048, 00:07:30.577 "data_size": 63488 00:07:30.577 }, 00:07:30.577 { 00:07:30.577 "name": "pt2", 00:07:30.577 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:30.577 "is_configured": true, 00:07:30.577 "data_offset": 2048, 00:07:30.577 "data_size": 63488 00:07:30.577 } 00:07:30.577 ] 00:07:30.577 }' 00:07:30.578 11:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.578 11:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.837 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:30.837 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:30.837 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:30.837 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:30.837 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:30.837 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:30.837 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:30.837 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:30.837 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.837 
11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.837 [2024-11-27 11:45:57.151085] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.837 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.837 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:30.837 "name": "raid_bdev1", 00:07:30.837 "aliases": [ 00:07:30.837 "a1821c7b-6bb4-4411-a0d9-eadc0b99a0c7" 00:07:30.837 ], 00:07:30.837 "product_name": "Raid Volume", 00:07:30.837 "block_size": 512, 00:07:30.837 "num_blocks": 126976, 00:07:30.837 "uuid": "a1821c7b-6bb4-4411-a0d9-eadc0b99a0c7", 00:07:30.837 "assigned_rate_limits": { 00:07:30.837 "rw_ios_per_sec": 0, 00:07:30.837 "rw_mbytes_per_sec": 0, 00:07:30.837 "r_mbytes_per_sec": 0, 00:07:30.837 "w_mbytes_per_sec": 0 00:07:30.837 }, 00:07:30.837 "claimed": false, 00:07:30.837 "zoned": false, 00:07:30.837 "supported_io_types": { 00:07:30.837 "read": true, 00:07:30.837 "write": true, 00:07:30.837 "unmap": true, 00:07:30.837 "flush": true, 00:07:30.837 "reset": true, 00:07:30.837 "nvme_admin": false, 00:07:30.837 "nvme_io": false, 00:07:30.837 "nvme_io_md": false, 00:07:30.837 "write_zeroes": true, 00:07:30.837 "zcopy": false, 00:07:30.837 "get_zone_info": false, 00:07:30.837 "zone_management": false, 00:07:30.837 "zone_append": false, 00:07:30.837 "compare": false, 00:07:30.837 "compare_and_write": false, 00:07:30.837 "abort": false, 00:07:30.837 "seek_hole": false, 00:07:30.837 "seek_data": false, 00:07:30.837 "copy": false, 00:07:30.837 "nvme_iov_md": false 00:07:30.837 }, 00:07:30.837 "memory_domains": [ 00:07:30.837 { 00:07:30.837 "dma_device_id": "system", 00:07:30.837 "dma_device_type": 1 00:07:30.837 }, 00:07:30.837 { 00:07:30.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.837 "dma_device_type": 2 00:07:30.837 }, 00:07:30.837 { 00:07:30.837 "dma_device_id": "system", 00:07:30.837 
"dma_device_type": 1 00:07:30.837 }, 00:07:30.837 { 00:07:30.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.837 "dma_device_type": 2 00:07:30.837 } 00:07:30.837 ], 00:07:30.837 "driver_specific": { 00:07:30.837 "raid": { 00:07:30.837 "uuid": "a1821c7b-6bb4-4411-a0d9-eadc0b99a0c7", 00:07:30.837 "strip_size_kb": 64, 00:07:30.837 "state": "online", 00:07:30.837 "raid_level": "concat", 00:07:30.837 "superblock": true, 00:07:30.837 "num_base_bdevs": 2, 00:07:30.837 "num_base_bdevs_discovered": 2, 00:07:30.837 "num_base_bdevs_operational": 2, 00:07:30.837 "base_bdevs_list": [ 00:07:30.837 { 00:07:30.837 "name": "pt1", 00:07:30.838 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:30.838 "is_configured": true, 00:07:30.838 "data_offset": 2048, 00:07:30.838 "data_size": 63488 00:07:30.838 }, 00:07:30.838 { 00:07:30.838 "name": "pt2", 00:07:30.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:30.838 "is_configured": true, 00:07:30.838 "data_offset": 2048, 00:07:30.838 "data_size": 63488 00:07:30.838 } 00:07:30.838 ] 00:07:30.838 } 00:07:30.838 } 00:07:30.838 }' 00:07:30.838 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:31.098 pt2' 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.098 [2024-11-27 11:45:57.386562] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a1821c7b-6bb4-4411-a0d9-eadc0b99a0c7 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a1821c7b-6bb4-4411-a0d9-eadc0b99a0c7 ']' 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.098 [2024-11-27 11:45:57.430214] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:31.098 [2024-11-27 11:45:57.430240] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:31.098 [2024-11-27 11:45:57.430316] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:31.098 [2024-11-27 11:45:57.430363] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:31.098 [2024-11-27 11:45:57.430383] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.098 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.358 
11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:31.358 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:31.358 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:31.358 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:31.358 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.358 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.358 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.358 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:31.358 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:31.358 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.358 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.358 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.358 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:31.358 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:31.358 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.358 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.358 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.358 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:31.358 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:31.358 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:31.358 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:31.358 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:31.358 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.358 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:31.358 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.358 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:31.358 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.358 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.358 [2024-11-27 11:45:57.554100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:31.358 [2024-11-27 11:45:57.556065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:31.358 [2024-11-27 11:45:57.556140] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:31.358 [2024-11-27 11:45:57.556214] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:31.358 [2024-11-27 11:45:57.556240] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:31.358 [2024-11-27 11:45:57.556253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:31.358 request: 00:07:31.358 { 00:07:31.358 "name": "raid_bdev1", 00:07:31.358 "raid_level": "concat", 00:07:31.358 "base_bdevs": [ 00:07:31.358 "malloc1", 00:07:31.358 "malloc2" 00:07:31.359 ], 00:07:31.359 "strip_size_kb": 64, 00:07:31.359 "superblock": false, 00:07:31.359 "method": "bdev_raid_create", 00:07:31.359 "req_id": 1 00:07:31.359 } 00:07:31.359 Got JSON-RPC error response 00:07:31.359 response: 00:07:31.359 { 00:07:31.359 "code": -17, 00:07:31.359 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:31.359 } 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.359 [2024-11-27 11:45:57.605942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:31.359 [2024-11-27 11:45:57.605991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.359 [2024-11-27 11:45:57.606007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:31.359 [2024-11-27 11:45:57.606016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.359 [2024-11-27 11:45:57.608155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.359 [2024-11-27 11:45:57.608193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:31.359 [2024-11-27 11:45:57.608266] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:31.359 [2024-11-27 11:45:57.608335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:31.359 pt1 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.359 "name": "raid_bdev1", 00:07:31.359 "uuid": "a1821c7b-6bb4-4411-a0d9-eadc0b99a0c7", 00:07:31.359 "strip_size_kb": 64, 00:07:31.359 "state": "configuring", 00:07:31.359 "raid_level": "concat", 00:07:31.359 "superblock": true, 00:07:31.359 "num_base_bdevs": 2, 00:07:31.359 "num_base_bdevs_discovered": 1, 00:07:31.359 "num_base_bdevs_operational": 2, 00:07:31.359 "base_bdevs_list": [ 00:07:31.359 { 00:07:31.359 "name": "pt1", 00:07:31.359 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:31.359 "is_configured": true, 00:07:31.359 "data_offset": 2048, 00:07:31.359 "data_size": 63488 00:07:31.359 }, 00:07:31.359 { 00:07:31.359 "name": null, 00:07:31.359 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:31.359 "is_configured": false, 00:07:31.359 "data_offset": 2048, 00:07:31.359 "data_size": 63488 00:07:31.359 } 00:07:31.359 ] 00:07:31.359 }' 00:07:31.359 11:45:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.359 11:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.928 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:31.928 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:31.928 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:31.928 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:31.928 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.928 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.928 [2024-11-27 11:45:58.073180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:31.928 [2024-11-27 11:45:58.073258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.928 [2024-11-27 11:45:58.073281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:31.928 [2024-11-27 11:45:58.073291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.928 [2024-11-27 11:45:58.073765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.928 [2024-11-27 11:45:58.073794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:31.928 [2024-11-27 11:45:58.073893] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:31.928 [2024-11-27 11:45:58.073922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:31.928 [2024-11-27 11:45:58.074050] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:31.928 [2024-11-27 11:45:58.074069] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:31.928 [2024-11-27 11:45:58.074308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:31.928 [2024-11-27 11:45:58.074470] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:31.928 [2024-11-27 11:45:58.074485] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:31.928 [2024-11-27 11:45:58.074632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.928 pt2 00:07:31.928 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.928 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:31.928 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:31.928 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:31.928 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:31.928 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:31.928 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:31.928 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.928 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.928 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.928 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.928 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.928 11:45:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.928 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.928 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.928 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.928 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.928 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.928 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.928 "name": "raid_bdev1", 00:07:31.928 "uuid": "a1821c7b-6bb4-4411-a0d9-eadc0b99a0c7", 00:07:31.928 "strip_size_kb": 64, 00:07:31.928 "state": "online", 00:07:31.928 "raid_level": "concat", 00:07:31.928 "superblock": true, 00:07:31.928 "num_base_bdevs": 2, 00:07:31.928 "num_base_bdevs_discovered": 2, 00:07:31.928 "num_base_bdevs_operational": 2, 00:07:31.928 "base_bdevs_list": [ 00:07:31.928 { 00:07:31.928 "name": "pt1", 00:07:31.928 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:31.928 "is_configured": true, 00:07:31.928 "data_offset": 2048, 00:07:31.928 "data_size": 63488 00:07:31.928 }, 00:07:31.928 { 00:07:31.928 "name": "pt2", 00:07:31.928 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:31.928 "is_configured": true, 00:07:31.928 "data_offset": 2048, 00:07:31.928 "data_size": 63488 00:07:31.928 } 00:07:31.928 ] 00:07:31.928 }' 00:07:31.928 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.928 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.188 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:32.188 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:32.188 
11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:32.188 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:32.188 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:32.188 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:32.188 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:32.188 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:32.188 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.188 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.188 [2024-11-27 11:45:58.552583] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.448 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.448 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:32.448 "name": "raid_bdev1", 00:07:32.448 "aliases": [ 00:07:32.448 "a1821c7b-6bb4-4411-a0d9-eadc0b99a0c7" 00:07:32.448 ], 00:07:32.448 "product_name": "Raid Volume", 00:07:32.448 "block_size": 512, 00:07:32.448 "num_blocks": 126976, 00:07:32.448 "uuid": "a1821c7b-6bb4-4411-a0d9-eadc0b99a0c7", 00:07:32.448 "assigned_rate_limits": { 00:07:32.448 "rw_ios_per_sec": 0, 00:07:32.448 "rw_mbytes_per_sec": 0, 00:07:32.448 "r_mbytes_per_sec": 0, 00:07:32.448 "w_mbytes_per_sec": 0 00:07:32.448 }, 00:07:32.448 "claimed": false, 00:07:32.448 "zoned": false, 00:07:32.448 "supported_io_types": { 00:07:32.448 "read": true, 00:07:32.448 "write": true, 00:07:32.448 "unmap": true, 00:07:32.448 "flush": true, 00:07:32.448 "reset": true, 00:07:32.448 "nvme_admin": false, 00:07:32.448 "nvme_io": false, 00:07:32.448 "nvme_io_md": false, 00:07:32.448 
"write_zeroes": true, 00:07:32.448 "zcopy": false, 00:07:32.448 "get_zone_info": false, 00:07:32.448 "zone_management": false, 00:07:32.448 "zone_append": false, 00:07:32.448 "compare": false, 00:07:32.448 "compare_and_write": false, 00:07:32.448 "abort": false, 00:07:32.448 "seek_hole": false, 00:07:32.448 "seek_data": false, 00:07:32.448 "copy": false, 00:07:32.448 "nvme_iov_md": false 00:07:32.448 }, 00:07:32.448 "memory_domains": [ 00:07:32.448 { 00:07:32.448 "dma_device_id": "system", 00:07:32.448 "dma_device_type": 1 00:07:32.448 }, 00:07:32.448 { 00:07:32.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.448 "dma_device_type": 2 00:07:32.448 }, 00:07:32.448 { 00:07:32.448 "dma_device_id": "system", 00:07:32.448 "dma_device_type": 1 00:07:32.448 }, 00:07:32.448 { 00:07:32.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.448 "dma_device_type": 2 00:07:32.448 } 00:07:32.448 ], 00:07:32.448 "driver_specific": { 00:07:32.448 "raid": { 00:07:32.448 "uuid": "a1821c7b-6bb4-4411-a0d9-eadc0b99a0c7", 00:07:32.448 "strip_size_kb": 64, 00:07:32.448 "state": "online", 00:07:32.448 "raid_level": "concat", 00:07:32.448 "superblock": true, 00:07:32.448 "num_base_bdevs": 2, 00:07:32.448 "num_base_bdevs_discovered": 2, 00:07:32.448 "num_base_bdevs_operational": 2, 00:07:32.448 "base_bdevs_list": [ 00:07:32.448 { 00:07:32.448 "name": "pt1", 00:07:32.448 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:32.448 "is_configured": true, 00:07:32.448 "data_offset": 2048, 00:07:32.448 "data_size": 63488 00:07:32.448 }, 00:07:32.448 { 00:07:32.448 "name": "pt2", 00:07:32.448 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:32.448 "is_configured": true, 00:07:32.448 "data_offset": 2048, 00:07:32.448 "data_size": 63488 00:07:32.448 } 00:07:32.448 ] 00:07:32.448 } 00:07:32.448 } 00:07:32.448 }' 00:07:32.448 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:32.448 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:32.448 pt2' 00:07:32.448 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.448 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:32.448 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:32.448 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.448 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:32.448 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.448 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.448 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.448 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:32.448 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:32.448 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:32.448 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:32.448 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.448 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.448 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.448 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.448 11:45:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:32.448 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:32.448 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:32.448 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.448 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.449 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:32.449 [2024-11-27 11:45:58.776286] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.449 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.449 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a1821c7b-6bb4-4411-a0d9-eadc0b99a0c7 '!=' a1821c7b-6bb4-4411-a0d9-eadc0b99a0c7 ']' 00:07:32.449 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:32.449 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:32.449 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:32.449 11:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62157 00:07:32.449 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62157 ']' 00:07:32.449 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62157 00:07:32.449 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:32.449 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.449 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62157 00:07:32.708 killing process with pid 62157 
00:07:32.708 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.708 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.708 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62157' 00:07:32.708 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62157 00:07:32.708 [2024-11-27 11:45:58.850537] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:32.708 [2024-11-27 11:45:58.850639] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:32.708 [2024-11-27 11:45:58.850690] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:32.708 [2024-11-27 11:45:58.850702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:32.708 11:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62157 00:07:32.708 [2024-11-27 11:45:59.072357] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:34.100 11:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:34.100 00:07:34.100 real 0m4.608s 00:07:34.100 user 0m6.506s 00:07:34.100 sys 0m0.724s 00:07:34.100 11:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.100 11:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.100 ************************************ 00:07:34.100 END TEST raid_superblock_test 00:07:34.100 ************************************ 00:07:34.100 11:46:00 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:34.100 11:46:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:34.100 11:46:00 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.100 11:46:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:34.100 ************************************ 00:07:34.100 START TEST raid_read_error_test 00:07:34.100 ************************************ 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:34.100 11:46:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5d8pJu6FzU 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62369 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62369 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62369 ']' 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.100 11:46:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.100 [2024-11-27 11:46:00.405054] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:07:34.100 [2024-11-27 11:46:00.405176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62369 ] 00:07:34.359 [2024-11-27 11:46:00.562740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.359 [2024-11-27 11:46:00.678260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.618 [2024-11-27 11:46:00.887785] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.618 [2024-11-27 11:46:00.887843] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.877 11:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.877 11:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:34.877 11:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:34.877 11:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:34.877 11:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.877 11:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.137 BaseBdev1_malloc 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.137 true 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.137 [2024-11-27 11:46:01.322688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:35.137 [2024-11-27 11:46:01.322744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.137 [2024-11-27 11:46:01.322765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:35.137 [2024-11-27 11:46:01.322776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.137 [2024-11-27 11:46:01.325133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.137 [2024-11-27 11:46:01.325172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:35.137 BaseBdev1 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:35.137 BaseBdev2_malloc 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.137 true 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.137 [2024-11-27 11:46:01.389962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:35.137 [2024-11-27 11:46:01.390016] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.137 [2024-11-27 11:46:01.390034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:35.137 [2024-11-27 11:46:01.390046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.137 [2024-11-27 11:46:01.392351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.137 [2024-11-27 11:46:01.392394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:35.137 BaseBdev2 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:35.137 
11:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.137 [2024-11-27 11:46:01.402005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:35.137 [2024-11-27 11:46:01.404042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:35.137 [2024-11-27 11:46:01.404256] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:35.137 [2024-11-27 11:46:01.404275] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:35.137 [2024-11-27 11:46:01.404570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:35.137 [2024-11-27 11:46:01.404771] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:35.137 [2024-11-27 11:46:01.404790] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:35.137 [2024-11-27 11:46:01.404989] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.137 "name": "raid_bdev1", 00:07:35.137 "uuid": "23792769-fff5-4e2f-b16c-eea309c9a3eb", 00:07:35.137 "strip_size_kb": 64, 00:07:35.137 "state": "online", 00:07:35.137 "raid_level": "concat", 00:07:35.137 "superblock": true, 00:07:35.137 "num_base_bdevs": 2, 00:07:35.137 "num_base_bdevs_discovered": 2, 00:07:35.137 "num_base_bdevs_operational": 2, 00:07:35.137 "base_bdevs_list": [ 00:07:35.137 { 00:07:35.137 "name": "BaseBdev1", 00:07:35.137 "uuid": "880782d7-ca61-516f-b356-fa6d2acbba5f", 00:07:35.137 "is_configured": true, 00:07:35.137 "data_offset": 2048, 00:07:35.137 "data_size": 63488 00:07:35.137 }, 00:07:35.137 { 00:07:35.137 "name": "BaseBdev2", 00:07:35.137 "uuid": "b118a918-970f-5856-9ee1-d9ccfb107f0e", 00:07:35.137 "is_configured": true, 00:07:35.137 "data_offset": 2048, 00:07:35.137 "data_size": 63488 00:07:35.137 } 00:07:35.137 ] 00:07:35.137 }' 00:07:35.137 11:46:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.137 11:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.706 11:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:35.706 11:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:35.706 [2024-11-27 11:46:01.954449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:36.644 11:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:36.644 11:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.644 11:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.644 11:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.644 11:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:36.644 11:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:36.644 11:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:36.644 11:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:36.644 11:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:36.644 11:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:36.644 11:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:36.644 11:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.644 11:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:36.644 11:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.644 11:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.644 11:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.644 11:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.644 11:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.644 11:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.644 11:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:36.644 11:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.644 11:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.644 11:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.644 "name": "raid_bdev1", 00:07:36.644 "uuid": "23792769-fff5-4e2f-b16c-eea309c9a3eb", 00:07:36.644 "strip_size_kb": 64, 00:07:36.644 "state": "online", 00:07:36.644 "raid_level": "concat", 00:07:36.644 "superblock": true, 00:07:36.644 "num_base_bdevs": 2, 00:07:36.644 "num_base_bdevs_discovered": 2, 00:07:36.644 "num_base_bdevs_operational": 2, 00:07:36.644 "base_bdevs_list": [ 00:07:36.644 { 00:07:36.644 "name": "BaseBdev1", 00:07:36.644 "uuid": "880782d7-ca61-516f-b356-fa6d2acbba5f", 00:07:36.644 "is_configured": true, 00:07:36.644 "data_offset": 2048, 00:07:36.644 "data_size": 63488 00:07:36.644 }, 00:07:36.644 { 00:07:36.644 "name": "BaseBdev2", 00:07:36.644 "uuid": "b118a918-970f-5856-9ee1-d9ccfb107f0e", 00:07:36.644 "is_configured": true, 00:07:36.644 "data_offset": 2048, 00:07:36.644 "data_size": 63488 00:07:36.644 } 00:07:36.644 ] 00:07:36.644 }' 00:07:36.644 11:46:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.644 11:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.211 11:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:37.211 11:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.211 11:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.211 [2024-11-27 11:46:03.359223] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:37.211 [2024-11-27 11:46:03.359266] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:37.211 [2024-11-27 11:46:03.362210] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:37.211 [2024-11-27 11:46:03.362260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.212 [2024-11-27 11:46:03.362293] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:37.212 [2024-11-27 11:46:03.362305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:37.212 11:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.212 { 00:07:37.212 "results": [ 00:07:37.212 { 00:07:37.212 "job": "raid_bdev1", 00:07:37.212 "core_mask": "0x1", 00:07:37.212 "workload": "randrw", 00:07:37.212 "percentage": 50, 00:07:37.212 "status": "finished", 00:07:37.212 "queue_depth": 1, 00:07:37.212 "io_size": 131072, 00:07:37.212 "runtime": 1.405626, 00:07:37.212 "iops": 15040.985297653857, 00:07:37.212 "mibps": 1880.123162206732, 00:07:37.212 "io_failed": 1, 00:07:37.212 "io_timeout": 0, 00:07:37.212 "avg_latency_us": 91.70185026189927, 00:07:37.212 "min_latency_us": 26.494323144104804, 00:07:37.212 "max_latency_us": 1452.380786026201 
00:07:37.212 } 00:07:37.212 ], 00:07:37.212 "core_count": 1 00:07:37.212 } 00:07:37.212 11:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62369 00:07:37.212 11:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62369 ']' 00:07:37.212 11:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62369 00:07:37.212 11:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:37.212 11:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.212 11:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62369 00:07:37.212 11:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.212 11:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.212 killing process with pid 62369 00:07:37.212 11:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62369' 00:07:37.212 11:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62369 00:07:37.212 [2024-11-27 11:46:03.409929] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:37.212 11:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62369 00:07:37.212 [2024-11-27 11:46:03.551072] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:38.592 11:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5d8pJu6FzU 00:07:38.592 11:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:38.592 11:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:38.592 11:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:07:38.592 11:46:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:38.592 11:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:38.592 11:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:38.592 11:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:07:38.592 00:07:38.592 real 0m4.477s 00:07:38.592 user 0m5.415s 00:07:38.592 sys 0m0.544s 00:07:38.592 11:46:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.592 11:46:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.592 ************************************ 00:07:38.592 END TEST raid_read_error_test 00:07:38.592 ************************************ 00:07:38.593 11:46:04 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:38.593 11:46:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:38.593 11:46:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.593 11:46:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:38.593 ************************************ 00:07:38.593 START TEST raid_write_error_test 00:07:38.593 ************************************ 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MzCJlhvlx0 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62514 
00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62514 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62514 ']' 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.593 11:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.593 [2024-11-27 11:46:04.944669] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:07:38.593 [2024-11-27 11:46:04.944803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62514 ] 00:07:38.851 [2024-11-27 11:46:05.120598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.110 [2024-11-27 11:46:05.240656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.111 [2024-11-27 11:46:05.454379] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.111 [2024-11-27 11:46:05.454430] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.681 BaseBdev1_malloc 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.681 true 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.681 [2024-11-27 11:46:05.869057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:39.681 [2024-11-27 11:46:05.869123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.681 [2024-11-27 11:46:05.869143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:39.681 [2024-11-27 11:46:05.869154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.681 [2024-11-27 11:46:05.871269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.681 [2024-11-27 11:46:05.871308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:39.681 BaseBdev1 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.681 BaseBdev2_malloc 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:39.681 11:46:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.681 true 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.681 [2024-11-27 11:46:05.935700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:39.681 [2024-11-27 11:46:05.935755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.681 [2024-11-27 11:46:05.935789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:39.681 [2024-11-27 11:46:05.935801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.681 [2024-11-27 11:46:05.938023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.681 [2024-11-27 11:46:05.938061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:39.681 BaseBdev2 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.681 [2024-11-27 11:46:05.947742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:39.681 [2024-11-27 11:46:05.949649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:39.681 [2024-11-27 11:46:05.949850] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:39.681 [2024-11-27 11:46:05.949873] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:39.681 [2024-11-27 11:46:05.950094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:39.681 [2024-11-27 11:46:05.950266] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:39.681 [2024-11-27 11:46:05.950289] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:39.681 [2024-11-27 11:46:05.950433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.681 11:46:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.681 11:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.681 11:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.681 "name": "raid_bdev1", 00:07:39.681 "uuid": "8eed21e5-0366-4efb-99ee-223418da0074", 00:07:39.681 "strip_size_kb": 64, 00:07:39.681 "state": "online", 00:07:39.681 "raid_level": "concat", 00:07:39.681 "superblock": true, 00:07:39.681 "num_base_bdevs": 2, 00:07:39.681 "num_base_bdevs_discovered": 2, 00:07:39.681 "num_base_bdevs_operational": 2, 00:07:39.681 "base_bdevs_list": [ 00:07:39.681 { 00:07:39.681 "name": "BaseBdev1", 00:07:39.681 "uuid": "d56e786e-13ff-5c88-9cd8-5284b997a388", 00:07:39.681 "is_configured": true, 00:07:39.681 "data_offset": 2048, 00:07:39.681 "data_size": 63488 00:07:39.681 }, 00:07:39.681 { 00:07:39.681 "name": "BaseBdev2", 00:07:39.681 "uuid": "266e2e1f-f81d-529c-a66c-db872e996a04", 00:07:39.681 "is_configured": true, 00:07:39.681 "data_offset": 2048, 00:07:39.681 "data_size": 63488 00:07:39.681 } 00:07:39.681 ] 00:07:39.681 }' 00:07:39.681 11:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.681 11:46:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.251 11:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:07:40.251 11:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:40.251 [2024-11-27 11:46:06.500294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:41.188 11:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:41.188 11:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.188 11:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.188 11:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.188 11:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:41.188 11:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:41.188 11:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:41.188 11:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:41.188 11:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:41.188 11:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:41.188 11:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:41.188 11:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.188 11:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.188 11:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.188 11:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:41.188 11:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.188 11:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.188 11:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.189 11:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:41.189 11:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.189 11:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.189 11:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.189 11:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.189 "name": "raid_bdev1", 00:07:41.189 "uuid": "8eed21e5-0366-4efb-99ee-223418da0074", 00:07:41.189 "strip_size_kb": 64, 00:07:41.189 "state": "online", 00:07:41.189 "raid_level": "concat", 00:07:41.189 "superblock": true, 00:07:41.189 "num_base_bdevs": 2, 00:07:41.189 "num_base_bdevs_discovered": 2, 00:07:41.189 "num_base_bdevs_operational": 2, 00:07:41.189 "base_bdevs_list": [ 00:07:41.189 { 00:07:41.189 "name": "BaseBdev1", 00:07:41.189 "uuid": "d56e786e-13ff-5c88-9cd8-5284b997a388", 00:07:41.189 "is_configured": true, 00:07:41.189 "data_offset": 2048, 00:07:41.189 "data_size": 63488 00:07:41.189 }, 00:07:41.189 { 00:07:41.189 "name": "BaseBdev2", 00:07:41.189 "uuid": "266e2e1f-f81d-529c-a66c-db872e996a04", 00:07:41.189 "is_configured": true, 00:07:41.189 "data_offset": 2048, 00:07:41.189 "data_size": 63488 00:07:41.189 } 00:07:41.189 ] 00:07:41.189 }' 00:07:41.189 11:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.189 11:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.763 11:46:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:41.763 11:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.763 11:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.763 [2024-11-27 11:46:07.844329] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:41.763 [2024-11-27 11:46:07.844375] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:41.763 [2024-11-27 11:46:07.847055] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:41.763 [2024-11-27 11:46:07.847104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.763 [2024-11-27 11:46:07.847136] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:41.763 [2024-11-27 11:46:07.847150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:41.763 { 00:07:41.763 "results": [ 00:07:41.763 { 00:07:41.763 "job": "raid_bdev1", 00:07:41.763 "core_mask": "0x1", 00:07:41.763 "workload": "randrw", 00:07:41.763 "percentage": 50, 00:07:41.763 "status": "finished", 00:07:41.763 "queue_depth": 1, 00:07:41.763 "io_size": 131072, 00:07:41.763 "runtime": 1.344686, 00:07:41.763 "iops": 15179.752001582525, 00:07:41.763 "mibps": 1897.4690001978156, 00:07:41.763 "io_failed": 1, 00:07:41.763 "io_timeout": 0, 00:07:41.763 "avg_latency_us": 90.86354414527774, 00:07:41.763 "min_latency_us": 25.7117903930131, 00:07:41.763 "max_latency_us": 1516.7720524017468 00:07:41.763 } 00:07:41.763 ], 00:07:41.763 "core_count": 1 00:07:41.763 } 00:07:41.763 11:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.763 11:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62514 00:07:41.763 11:46:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62514 ']' 00:07:41.763 11:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62514 00:07:41.763 11:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:41.763 11:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.763 11:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62514 00:07:41.763 11:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.763 11:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.763 killing process with pid 62514 00:07:41.763 11:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62514' 00:07:41.763 11:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62514 00:07:41.763 [2024-11-27 11:46:07.886861] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:41.763 11:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62514 00:07:41.763 [2024-11-27 11:46:08.031739] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:43.142 11:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MzCJlhvlx0 00:07:43.142 11:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:43.142 11:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:43.142 11:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:43.142 11:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:43.142 11:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:43.142 11:46:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:43.142 11:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:43.142 00:07:43.142 real 0m4.453s 00:07:43.142 user 0m5.333s 00:07:43.142 sys 0m0.559s 00:07:43.142 11:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.142 11:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.142 ************************************ 00:07:43.142 END TEST raid_write_error_test 00:07:43.142 ************************************ 00:07:43.142 11:46:09 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:43.142 11:46:09 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:43.142 11:46:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:43.142 11:46:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.142 11:46:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:43.142 ************************************ 00:07:43.142 START TEST raid_state_function_test 00:07:43.142 ************************************ 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62658 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62658' 00:07:43.142 Process raid pid: 62658 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62658 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62658 ']' 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.142 11:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.142 [2024-11-27 11:46:09.455444] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:07:43.142 [2024-11-27 11:46:09.455594] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.401 [2024-11-27 11:46:09.611883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.401 [2024-11-27 11:46:09.729834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.661 [2024-11-27 11:46:09.942064] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.661 [2024-11-27 11:46:09.942113] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.229 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.229 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:44.229 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:44.229 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.229 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.229 [2024-11-27 11:46:10.325742] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:44.229 [2024-11-27 11:46:10.325814] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:44.229 [2024-11-27 11:46:10.325829] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:44.229 [2024-11-27 11:46:10.325855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:44.229 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.229 11:46:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:44.229 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.229 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.229 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:44.229 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:44.229 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.229 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.229 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.229 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.229 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.229 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.229 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.229 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.229 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.229 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.229 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.230 "name": "Existed_Raid", 00:07:44.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.230 "strip_size_kb": 0, 00:07:44.230 "state": "configuring", 00:07:44.230 
"raid_level": "raid1", 00:07:44.230 "superblock": false, 00:07:44.230 "num_base_bdevs": 2, 00:07:44.230 "num_base_bdevs_discovered": 0, 00:07:44.230 "num_base_bdevs_operational": 2, 00:07:44.230 "base_bdevs_list": [ 00:07:44.230 { 00:07:44.230 "name": "BaseBdev1", 00:07:44.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.230 "is_configured": false, 00:07:44.230 "data_offset": 0, 00:07:44.230 "data_size": 0 00:07:44.230 }, 00:07:44.230 { 00:07:44.230 "name": "BaseBdev2", 00:07:44.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.230 "is_configured": false, 00:07:44.230 "data_offset": 0, 00:07:44.230 "data_size": 0 00:07:44.230 } 00:07:44.230 ] 00:07:44.230 }' 00:07:44.230 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.230 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.489 [2024-11-27 11:46:10.689056] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:44.489 [2024-11-27 11:46:10.689110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:44.489 [2024-11-27 11:46:10.701067] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:44.489 [2024-11-27 11:46:10.701117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:44.489 [2024-11-27 11:46:10.701127] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:44.489 [2024-11-27 11:46:10.701139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.489 [2024-11-27 11:46:10.753291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:44.489 BaseBdev1 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.489 [ 00:07:44.489 { 00:07:44.489 "name": "BaseBdev1", 00:07:44.489 "aliases": [ 00:07:44.489 "ed510cf7-cabb-47d6-bc38-38c1b241e31c" 00:07:44.489 ], 00:07:44.489 "product_name": "Malloc disk", 00:07:44.489 "block_size": 512, 00:07:44.489 "num_blocks": 65536, 00:07:44.489 "uuid": "ed510cf7-cabb-47d6-bc38-38c1b241e31c", 00:07:44.489 "assigned_rate_limits": { 00:07:44.489 "rw_ios_per_sec": 0, 00:07:44.489 "rw_mbytes_per_sec": 0, 00:07:44.489 "r_mbytes_per_sec": 0, 00:07:44.489 "w_mbytes_per_sec": 0 00:07:44.489 }, 00:07:44.489 "claimed": true, 00:07:44.489 "claim_type": "exclusive_write", 00:07:44.489 "zoned": false, 00:07:44.489 "supported_io_types": { 00:07:44.489 "read": true, 00:07:44.489 "write": true, 00:07:44.489 "unmap": true, 00:07:44.489 "flush": true, 00:07:44.489 "reset": true, 00:07:44.489 "nvme_admin": false, 00:07:44.489 "nvme_io": false, 00:07:44.489 "nvme_io_md": false, 00:07:44.489 "write_zeroes": true, 00:07:44.489 "zcopy": true, 00:07:44.489 "get_zone_info": false, 00:07:44.489 "zone_management": false, 00:07:44.489 "zone_append": false, 00:07:44.489 "compare": false, 00:07:44.489 "compare_and_write": false, 00:07:44.489 "abort": true, 00:07:44.489 "seek_hole": false, 00:07:44.489 "seek_data": false, 00:07:44.489 "copy": true, 00:07:44.489 "nvme_iov_md": 
false 00:07:44.489 }, 00:07:44.489 "memory_domains": [ 00:07:44.489 { 00:07:44.489 "dma_device_id": "system", 00:07:44.489 "dma_device_type": 1 00:07:44.489 }, 00:07:44.489 { 00:07:44.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.489 "dma_device_type": 2 00:07:44.489 } 00:07:44.489 ], 00:07:44.489 "driver_specific": {} 00:07:44.489 } 00:07:44.489 ] 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.489 11:46:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.489 "name": "Existed_Raid", 00:07:44.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.489 "strip_size_kb": 0, 00:07:44.489 "state": "configuring", 00:07:44.489 "raid_level": "raid1", 00:07:44.489 "superblock": false, 00:07:44.489 "num_base_bdevs": 2, 00:07:44.489 "num_base_bdevs_discovered": 1, 00:07:44.489 "num_base_bdevs_operational": 2, 00:07:44.489 "base_bdevs_list": [ 00:07:44.489 { 00:07:44.489 "name": "BaseBdev1", 00:07:44.489 "uuid": "ed510cf7-cabb-47d6-bc38-38c1b241e31c", 00:07:44.489 "is_configured": true, 00:07:44.489 "data_offset": 0, 00:07:44.489 "data_size": 65536 00:07:44.489 }, 00:07:44.489 { 00:07:44.489 "name": "BaseBdev2", 00:07:44.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.489 "is_configured": false, 00:07:44.489 "data_offset": 0, 00:07:44.489 "data_size": 0 00:07:44.489 } 00:07:44.489 ] 00:07:44.489 }' 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.489 11:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.055 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:45.055 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.055 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.055 [2024-11-27 11:46:11.252532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:45.055 [2024-11-27 11:46:11.252606] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:45.055 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.055 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:45.055 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.055 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.055 [2024-11-27 11:46:11.264554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:45.055 [2024-11-27 11:46:11.266485] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:45.055 [2024-11-27 11:46:11.266535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:45.055 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.055 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:45.055 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:45.055 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:45.055 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.055 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:45.055 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:45.055 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:45.055 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:45.055 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.055 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.055 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.055 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.055 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.055 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.055 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.055 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.055 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.055 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.055 "name": "Existed_Raid", 00:07:45.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.055 "strip_size_kb": 0, 00:07:45.055 "state": "configuring", 00:07:45.055 "raid_level": "raid1", 00:07:45.055 "superblock": false, 00:07:45.055 "num_base_bdevs": 2, 00:07:45.055 "num_base_bdevs_discovered": 1, 00:07:45.055 "num_base_bdevs_operational": 2, 00:07:45.055 "base_bdevs_list": [ 00:07:45.055 { 00:07:45.055 "name": "BaseBdev1", 00:07:45.055 "uuid": "ed510cf7-cabb-47d6-bc38-38c1b241e31c", 00:07:45.055 "is_configured": true, 00:07:45.055 "data_offset": 0, 00:07:45.055 "data_size": 65536 00:07:45.055 }, 00:07:45.055 { 00:07:45.055 "name": "BaseBdev2", 00:07:45.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.055 "is_configured": false, 00:07:45.055 "data_offset": 0, 00:07:45.055 "data_size": 0 00:07:45.055 } 00:07:45.055 
] 00:07:45.055 }' 00:07:45.055 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.055 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.623 [2024-11-27 11:46:11.769389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:45.623 [2024-11-27 11:46:11.769468] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:45.623 [2024-11-27 11:46:11.769478] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:45.623 [2024-11-27 11:46:11.769761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:45.623 [2024-11-27 11:46:11.770018] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:45.623 [2024-11-27 11:46:11.770040] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:45.623 [2024-11-27 11:46:11.770388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.623 BaseBdev2 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:45.623 11:46:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.623 [ 00:07:45.623 { 00:07:45.623 "name": "BaseBdev2", 00:07:45.623 "aliases": [ 00:07:45.623 "c23c8a64-7902-41f0-94c6-f3d7e319ac73" 00:07:45.623 ], 00:07:45.623 "product_name": "Malloc disk", 00:07:45.623 "block_size": 512, 00:07:45.623 "num_blocks": 65536, 00:07:45.623 "uuid": "c23c8a64-7902-41f0-94c6-f3d7e319ac73", 00:07:45.623 "assigned_rate_limits": { 00:07:45.623 "rw_ios_per_sec": 0, 00:07:45.623 "rw_mbytes_per_sec": 0, 00:07:45.623 "r_mbytes_per_sec": 0, 00:07:45.623 "w_mbytes_per_sec": 0 00:07:45.623 }, 00:07:45.623 "claimed": true, 00:07:45.623 "claim_type": "exclusive_write", 00:07:45.623 "zoned": false, 00:07:45.623 "supported_io_types": { 00:07:45.623 "read": true, 00:07:45.623 "write": true, 00:07:45.623 "unmap": true, 00:07:45.623 "flush": true, 00:07:45.623 "reset": true, 00:07:45.623 "nvme_admin": false, 00:07:45.623 "nvme_io": false, 00:07:45.623 "nvme_io_md": 
false, 00:07:45.623 "write_zeroes": true, 00:07:45.623 "zcopy": true, 00:07:45.623 "get_zone_info": false, 00:07:45.623 "zone_management": false, 00:07:45.623 "zone_append": false, 00:07:45.623 "compare": false, 00:07:45.623 "compare_and_write": false, 00:07:45.623 "abort": true, 00:07:45.623 "seek_hole": false, 00:07:45.623 "seek_data": false, 00:07:45.623 "copy": true, 00:07:45.623 "nvme_iov_md": false 00:07:45.623 }, 00:07:45.623 "memory_domains": [ 00:07:45.623 { 00:07:45.623 "dma_device_id": "system", 00:07:45.623 "dma_device_type": 1 00:07:45.623 }, 00:07:45.623 { 00:07:45.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.623 "dma_device_type": 2 00:07:45.623 } 00:07:45.623 ], 00:07:45.623 "driver_specific": {} 00:07:45.623 } 00:07:45.623 ] 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.623 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.623 "name": "Existed_Raid", 00:07:45.623 "uuid": "71b9e435-e530-45c4-87f6-9e865cc1475c", 00:07:45.623 "strip_size_kb": 0, 00:07:45.623 "state": "online", 00:07:45.623 "raid_level": "raid1", 00:07:45.623 "superblock": false, 00:07:45.624 "num_base_bdevs": 2, 00:07:45.624 "num_base_bdevs_discovered": 2, 00:07:45.624 "num_base_bdevs_operational": 2, 00:07:45.624 "base_bdevs_list": [ 00:07:45.624 { 00:07:45.624 "name": "BaseBdev1", 00:07:45.624 "uuid": "ed510cf7-cabb-47d6-bc38-38c1b241e31c", 00:07:45.624 "is_configured": true, 00:07:45.624 "data_offset": 0, 00:07:45.624 "data_size": 65536 00:07:45.624 }, 00:07:45.624 { 00:07:45.624 "name": "BaseBdev2", 00:07:45.624 "uuid": "c23c8a64-7902-41f0-94c6-f3d7e319ac73", 00:07:45.624 "is_configured": true, 00:07:45.624 "data_offset": 0, 00:07:45.624 "data_size": 65536 00:07:45.624 } 00:07:45.624 ] 00:07:45.624 }' 00:07:45.624 11:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:45.624 11:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.883 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:45.883 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:45.883 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:45.883 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:45.883 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:45.883 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:45.883 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:45.883 11:46:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.883 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:45.883 11:46:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.883 [2024-11-27 11:46:12.261081] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.141 11:46:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.141 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:46.141 "name": "Existed_Raid", 00:07:46.141 "aliases": [ 00:07:46.141 "71b9e435-e530-45c4-87f6-9e865cc1475c" 00:07:46.141 ], 00:07:46.141 "product_name": "Raid Volume", 00:07:46.141 "block_size": 512, 00:07:46.141 "num_blocks": 65536, 00:07:46.141 "uuid": "71b9e435-e530-45c4-87f6-9e865cc1475c", 00:07:46.141 "assigned_rate_limits": { 00:07:46.141 "rw_ios_per_sec": 0, 00:07:46.141 "rw_mbytes_per_sec": 0, 00:07:46.141 "r_mbytes_per_sec": 
0, 00:07:46.141 "w_mbytes_per_sec": 0 00:07:46.141 }, 00:07:46.141 "claimed": false, 00:07:46.141 "zoned": false, 00:07:46.141 "supported_io_types": { 00:07:46.141 "read": true, 00:07:46.141 "write": true, 00:07:46.141 "unmap": false, 00:07:46.141 "flush": false, 00:07:46.141 "reset": true, 00:07:46.141 "nvme_admin": false, 00:07:46.141 "nvme_io": false, 00:07:46.141 "nvme_io_md": false, 00:07:46.141 "write_zeroes": true, 00:07:46.141 "zcopy": false, 00:07:46.141 "get_zone_info": false, 00:07:46.141 "zone_management": false, 00:07:46.141 "zone_append": false, 00:07:46.141 "compare": false, 00:07:46.141 "compare_and_write": false, 00:07:46.141 "abort": false, 00:07:46.141 "seek_hole": false, 00:07:46.141 "seek_data": false, 00:07:46.141 "copy": false, 00:07:46.141 "nvme_iov_md": false 00:07:46.141 }, 00:07:46.141 "memory_domains": [ 00:07:46.141 { 00:07:46.141 "dma_device_id": "system", 00:07:46.141 "dma_device_type": 1 00:07:46.141 }, 00:07:46.141 { 00:07:46.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.141 "dma_device_type": 2 00:07:46.141 }, 00:07:46.141 { 00:07:46.141 "dma_device_id": "system", 00:07:46.141 "dma_device_type": 1 00:07:46.141 }, 00:07:46.141 { 00:07:46.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.141 "dma_device_type": 2 00:07:46.141 } 00:07:46.141 ], 00:07:46.141 "driver_specific": { 00:07:46.141 "raid": { 00:07:46.141 "uuid": "71b9e435-e530-45c4-87f6-9e865cc1475c", 00:07:46.141 "strip_size_kb": 0, 00:07:46.141 "state": "online", 00:07:46.141 "raid_level": "raid1", 00:07:46.141 "superblock": false, 00:07:46.141 "num_base_bdevs": 2, 00:07:46.141 "num_base_bdevs_discovered": 2, 00:07:46.141 "num_base_bdevs_operational": 2, 00:07:46.141 "base_bdevs_list": [ 00:07:46.141 { 00:07:46.141 "name": "BaseBdev1", 00:07:46.141 "uuid": "ed510cf7-cabb-47d6-bc38-38c1b241e31c", 00:07:46.141 "is_configured": true, 00:07:46.141 "data_offset": 0, 00:07:46.141 "data_size": 65536 00:07:46.141 }, 00:07:46.141 { 00:07:46.141 "name": "BaseBdev2", 
00:07:46.141 "uuid": "c23c8a64-7902-41f0-94c6-f3d7e319ac73", 00:07:46.141 "is_configured": true, 00:07:46.141 "data_offset": 0, 00:07:46.141 "data_size": 65536 00:07:46.141 } 00:07:46.141 ] 00:07:46.141 } 00:07:46.141 } 00:07:46.141 }' 00:07:46.141 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:46.141 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:46.141 BaseBdev2' 00:07:46.141 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.141 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:46.141 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.141 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:46.141 11:46:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.141 11:46:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.141 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.141 11:46:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.141 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.142 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.142 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.142 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:46.142 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.142 11:46:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.142 11:46:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.142 11:46:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.142 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.142 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.142 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:46.142 11:46:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.142 11:46:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.142 [2024-11-27 11:46:12.464388] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:46.400 11:46:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.400 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:46.400 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:46.400 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:46.400 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:46.400 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:46.400 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:46.400 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:07:46.400 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.400 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:46.400 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:46.400 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:46.400 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.400 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.400 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.400 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.400 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.400 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.400 11:46:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.400 11:46:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.400 11:46:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.400 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.400 "name": "Existed_Raid", 00:07:46.400 "uuid": "71b9e435-e530-45c4-87f6-9e865cc1475c", 00:07:46.400 "strip_size_kb": 0, 00:07:46.400 "state": "online", 00:07:46.400 "raid_level": "raid1", 00:07:46.400 "superblock": false, 00:07:46.400 "num_base_bdevs": 2, 00:07:46.400 "num_base_bdevs_discovered": 1, 00:07:46.400 "num_base_bdevs_operational": 1, 00:07:46.400 "base_bdevs_list": [ 00:07:46.400 
{ 00:07:46.400 "name": null, 00:07:46.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.400 "is_configured": false, 00:07:46.400 "data_offset": 0, 00:07:46.400 "data_size": 65536 00:07:46.400 }, 00:07:46.400 { 00:07:46.400 "name": "BaseBdev2", 00:07:46.400 "uuid": "c23c8a64-7902-41f0-94c6-f3d7e319ac73", 00:07:46.400 "is_configured": true, 00:07:46.400 "data_offset": 0, 00:07:46.400 "data_size": 65536 00:07:46.400 } 00:07:46.400 ] 00:07:46.400 }' 00:07:46.400 11:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.400 11:46:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.658 11:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:46.658 11:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:46.658 11:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.658 11:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:46.658 11:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.658 11:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.916 11:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.916 11:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:46.916 11:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:46.916 11:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:46.916 11:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.916 11:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:46.916 [2024-11-27 11:46:13.078015] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:46.916 [2024-11-27 11:46:13.078188] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:46.916 [2024-11-27 11:46:13.181137] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:46.916 [2024-11-27 11:46:13.181300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:46.916 [2024-11-27 11:46:13.181353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:46.916 11:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.916 11:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:46.916 11:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:46.917 11:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.917 11:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:46.917 11:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.917 11:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.917 11:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.917 11:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:46.917 11:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:46.917 11:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:46.917 11:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62658 00:07:46.917 11:46:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62658 ']' 00:07:46.917 11:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62658 00:07:46.917 11:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:46.917 11:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.917 11:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62658 00:07:46.917 killing process with pid 62658 00:07:46.917 11:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.917 11:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.917 11:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62658' 00:07:46.917 11:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62658 00:07:46.917 [2024-11-27 11:46:13.284701] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:46.917 11:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62658 00:07:47.174 [2024-11-27 11:46:13.305085] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:48.110 ************************************ 00:07:48.110 END TEST raid_state_function_test 00:07:48.110 ************************************ 00:07:48.110 11:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:48.110 00:07:48.110 real 0m5.097s 00:07:48.110 user 0m7.334s 00:07:48.110 sys 0m0.814s 00:07:48.110 11:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.110 11:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.369 11:46:14 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:48.369 11:46:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:48.369 11:46:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.369 11:46:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:48.369 ************************************ 00:07:48.369 START TEST raid_state_function_test_sb 00:07:48.369 ************************************ 00:07:48.369 11:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:07:48.369 11:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:48.369 11:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:48.369 11:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:48.370 11:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:48.370 11:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:48.370 11:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:48.370 11:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:48.370 11:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:48.370 11:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:48.370 11:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:48.370 11:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:48.370 11:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:48.370 11:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:48.370 11:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:48.370 11:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:48.370 11:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:48.370 11:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:48.370 11:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:48.370 11:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:48.370 11:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:48.370 11:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:48.370 11:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:48.370 11:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62911 00:07:48.370 11:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:48.370 11:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62911' 00:07:48.370 Process raid pid: 62911 00:07:48.370 11:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62911 00:07:48.370 11:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62911 ']' 00:07:48.370 11:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.370 11:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.370 11:46:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.370 11:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.370 11:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.370 [2024-11-27 11:46:14.610017] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:07:48.370 [2024-11-27 11:46:14.610224] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.630 [2024-11-27 11:46:14.768395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.630 [2024-11-27 11:46:14.884633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.890 [2024-11-27 11:46:15.102398] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.890 [2024-11-27 11:46:15.102528] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.151 11:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.151 11:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:49.151 11:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:49.151 11:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.151 11:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.151 [2024-11-27 11:46:15.463601] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:49.151 [2024-11-27 11:46:15.463710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:49.151 [2024-11-27 11:46:15.463765] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:49.151 [2024-11-27 11:46:15.463803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:49.151 11:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.151 11:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:49.151 11:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.151 11:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.151 11:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:49.151 11:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:49.151 11:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.151 11:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.151 11:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.151 11:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.151 11:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.151 11:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.151 11:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:07:49.151 11:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.151 11:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.151 11:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.151 11:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.151 "name": "Existed_Raid", 00:07:49.151 "uuid": "64521536-bc71-4537-81e9-810cfabf6e1d", 00:07:49.151 "strip_size_kb": 0, 00:07:49.151 "state": "configuring", 00:07:49.151 "raid_level": "raid1", 00:07:49.151 "superblock": true, 00:07:49.151 "num_base_bdevs": 2, 00:07:49.151 "num_base_bdevs_discovered": 0, 00:07:49.151 "num_base_bdevs_operational": 2, 00:07:49.151 "base_bdevs_list": [ 00:07:49.151 { 00:07:49.151 "name": "BaseBdev1", 00:07:49.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.151 "is_configured": false, 00:07:49.151 "data_offset": 0, 00:07:49.151 "data_size": 0 00:07:49.151 }, 00:07:49.151 { 00:07:49.151 "name": "BaseBdev2", 00:07:49.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.151 "is_configured": false, 00:07:49.151 "data_offset": 0, 00:07:49.151 "data_size": 0 00:07:49.151 } 00:07:49.151 ] 00:07:49.151 }' 00:07:49.151 11:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.151 11:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.760 11:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:49.760 11:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.760 11:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.760 [2024-11-27 11:46:15.950705] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:07:49.760 [2024-11-27 11:46:15.950794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:49.760 11:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.760 11:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:49.760 11:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.760 11:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.760 [2024-11-27 11:46:15.962674] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:49.760 [2024-11-27 11:46:15.962761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:49.760 [2024-11-27 11:46:15.962797] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:49.760 [2024-11-27 11:46:15.962827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:49.760 11:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.760 11:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:49.760 11:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.760 11:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.760 [2024-11-27 11:46:16.015550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:49.760 BaseBdev1 00:07:49.760 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.760 11:46:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:49.760 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:49.760 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:49.760 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:49.760 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:49.760 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:49.760 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:49.760 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.760 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.760 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.760 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:49.761 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.761 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.761 [ 00:07:49.761 { 00:07:49.761 "name": "BaseBdev1", 00:07:49.761 "aliases": [ 00:07:49.761 "514b7e95-4349-45e6-8fcc-3db52597f7fd" 00:07:49.761 ], 00:07:49.761 "product_name": "Malloc disk", 00:07:49.761 "block_size": 512, 00:07:49.761 "num_blocks": 65536, 00:07:49.761 "uuid": "514b7e95-4349-45e6-8fcc-3db52597f7fd", 00:07:49.761 "assigned_rate_limits": { 00:07:49.761 "rw_ios_per_sec": 0, 00:07:49.761 "rw_mbytes_per_sec": 0, 00:07:49.761 "r_mbytes_per_sec": 0, 00:07:49.761 "w_mbytes_per_sec": 0 00:07:49.761 }, 00:07:49.761 "claimed": true, 
00:07:49.761 "claim_type": "exclusive_write", 00:07:49.761 "zoned": false, 00:07:49.761 "supported_io_types": { 00:07:49.761 "read": true, 00:07:49.761 "write": true, 00:07:49.761 "unmap": true, 00:07:49.761 "flush": true, 00:07:49.761 "reset": true, 00:07:49.761 "nvme_admin": false, 00:07:49.761 "nvme_io": false, 00:07:49.761 "nvme_io_md": false, 00:07:49.761 "write_zeroes": true, 00:07:49.761 "zcopy": true, 00:07:49.761 "get_zone_info": false, 00:07:49.761 "zone_management": false, 00:07:49.761 "zone_append": false, 00:07:49.761 "compare": false, 00:07:49.761 "compare_and_write": false, 00:07:49.761 "abort": true, 00:07:49.761 "seek_hole": false, 00:07:49.761 "seek_data": false, 00:07:49.761 "copy": true, 00:07:49.761 "nvme_iov_md": false 00:07:49.761 }, 00:07:49.761 "memory_domains": [ 00:07:49.761 { 00:07:49.761 "dma_device_id": "system", 00:07:49.761 "dma_device_type": 1 00:07:49.761 }, 00:07:49.761 { 00:07:49.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.761 "dma_device_type": 2 00:07:49.761 } 00:07:49.761 ], 00:07:49.761 "driver_specific": {} 00:07:49.761 } 00:07:49.761 ] 00:07:49.761 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.761 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:49.761 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:49.761 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.761 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.761 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:49.761 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:49.761 11:46:16 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.761 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.761 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.761 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.761 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.761 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.761 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.761 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.761 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.761 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.761 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.761 "name": "Existed_Raid", 00:07:49.761 "uuid": "693c8710-90b7-4e8b-b479-752df1f9337e", 00:07:49.761 "strip_size_kb": 0, 00:07:49.761 "state": "configuring", 00:07:49.761 "raid_level": "raid1", 00:07:49.761 "superblock": true, 00:07:49.761 "num_base_bdevs": 2, 00:07:49.761 "num_base_bdevs_discovered": 1, 00:07:49.761 "num_base_bdevs_operational": 2, 00:07:49.761 "base_bdevs_list": [ 00:07:49.761 { 00:07:49.761 "name": "BaseBdev1", 00:07:49.761 "uuid": "514b7e95-4349-45e6-8fcc-3db52597f7fd", 00:07:49.761 "is_configured": true, 00:07:49.761 "data_offset": 2048, 00:07:49.761 "data_size": 63488 00:07:49.761 }, 00:07:49.761 { 00:07:49.761 "name": "BaseBdev2", 00:07:49.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.761 "is_configured": false, 00:07:49.761 
"data_offset": 0, 00:07:49.761 "data_size": 0 00:07:49.761 } 00:07:49.761 ] 00:07:49.761 }' 00:07:49.761 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.761 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.330 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:50.330 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.330 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.330 [2024-11-27 11:46:16.554673] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:50.330 [2024-11-27 11:46:16.554776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:50.330 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.330 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:50.330 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.330 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.330 [2024-11-27 11:46:16.566686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:50.330 [2024-11-27 11:46:16.568515] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.330 [2024-11-27 11:46:16.568593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.330 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.330 11:46:16 bdev_raid.raid_state_function_test_sb -- 
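The `verify_raid_bdev_state` helper traced above pipes `bdev_raid_get_bdevs all` through `jq`, stores the selected object in `raid_bdev_info`, and compares its fields against the expected values. A minimal Python sketch of that field-by-field check, using the "configuring" JSON captured in this log (the helper name `check_state` is ours, not SPDK's):

```python
import json

# Abbreviated from the `raid_bdev_info` JSON dumped in the log above
raid_bdev_info = json.loads("""{
  "name": "Existed_Raid",
  "strip_size_kb": 0,
  "state": "configuring",
  "raid_level": "raid1",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 2,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true, "data_offset": 2048, "data_size": 63488},
    {"name": "BaseBdev2", "is_configured": false, "data_offset": 0, "data_size": 0}
  ]
}""")

def check_state(info, expected_state, raid_level, strip_size, operational):
    # Mirrors the shell helper's comparisons of the jq-extracted fields
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    # Discovered bdevs are the base bdevs reported as configured
    discovered = sum(b["is_configured"] for b in info["base_bdevs_list"])
    assert discovered == info["num_base_bdevs_discovered"]
    return True

print(check_state(raid_bdev_info, "configuring", "raid1", 0, 2))
```

Here one base bdev is discovered but two are operational targets, which is why the raid stays in `configuring` until BaseBdev2 appears.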
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:50.330 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:50.330 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:50.330 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.330 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.330 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.330 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.330 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.330 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.330 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.330 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.330 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.330 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.330 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.330 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.330 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.330 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.330 11:46:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.330 "name": "Existed_Raid", 00:07:50.330 "uuid": "c231c04e-bf8c-4d7d-abb5-cadf608fbf0c", 00:07:50.330 "strip_size_kb": 0, 00:07:50.330 "state": "configuring", 00:07:50.330 "raid_level": "raid1", 00:07:50.330 "superblock": true, 00:07:50.330 "num_base_bdevs": 2, 00:07:50.330 "num_base_bdevs_discovered": 1, 00:07:50.330 "num_base_bdevs_operational": 2, 00:07:50.330 "base_bdevs_list": [ 00:07:50.330 { 00:07:50.330 "name": "BaseBdev1", 00:07:50.330 "uuid": "514b7e95-4349-45e6-8fcc-3db52597f7fd", 00:07:50.330 "is_configured": true, 00:07:50.330 "data_offset": 2048, 00:07:50.330 "data_size": 63488 00:07:50.330 }, 00:07:50.330 { 00:07:50.330 "name": "BaseBdev2", 00:07:50.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.330 "is_configured": false, 00:07:50.330 "data_offset": 0, 00:07:50.330 "data_size": 0 00:07:50.330 } 00:07:50.330 ] 00:07:50.330 }' 00:07:50.330 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.331 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.590 11:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:50.590 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.590 11:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.851 [2024-11-27 11:46:17.008147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:50.851 [2024-11-27 11:46:17.008524] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:50.851 [2024-11-27 11:46:17.008582] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:50.851 [2024-11-27 11:46:17.008889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:50.851 
BaseBdev2 00:07:50.851 [2024-11-27 11:46:17.009106] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:50.851 [2024-11-27 11:46:17.009126] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:50.851 [2024-11-27 11:46:17.009309] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.851 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.851 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:50.851 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:50.851 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:50.851 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:50.851 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:50.851 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:50.851 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:50.851 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.851 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.851 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.851 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:50.851 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.851 11:46:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:50.851 [ 00:07:50.851 { 00:07:50.851 "name": "BaseBdev2", 00:07:50.851 "aliases": [ 00:07:50.851 "834190ef-4658-4963-9923-f3ce7ae210e3" 00:07:50.851 ], 00:07:50.851 "product_name": "Malloc disk", 00:07:50.851 "block_size": 512, 00:07:50.851 "num_blocks": 65536, 00:07:50.851 "uuid": "834190ef-4658-4963-9923-f3ce7ae210e3", 00:07:50.851 "assigned_rate_limits": { 00:07:50.851 "rw_ios_per_sec": 0, 00:07:50.851 "rw_mbytes_per_sec": 0, 00:07:50.851 "r_mbytes_per_sec": 0, 00:07:50.851 "w_mbytes_per_sec": 0 00:07:50.851 }, 00:07:50.851 "claimed": true, 00:07:50.851 "claim_type": "exclusive_write", 00:07:50.851 "zoned": false, 00:07:50.851 "supported_io_types": { 00:07:50.851 "read": true, 00:07:50.851 "write": true, 00:07:50.851 "unmap": true, 00:07:50.851 "flush": true, 00:07:50.851 "reset": true, 00:07:50.851 "nvme_admin": false, 00:07:50.851 "nvme_io": false, 00:07:50.851 "nvme_io_md": false, 00:07:50.851 "write_zeroes": true, 00:07:50.851 "zcopy": true, 00:07:50.851 "get_zone_info": false, 00:07:50.851 "zone_management": false, 00:07:50.851 "zone_append": false, 00:07:50.851 "compare": false, 00:07:50.851 "compare_and_write": false, 00:07:50.851 "abort": true, 00:07:50.851 "seek_hole": false, 00:07:50.851 "seek_data": false, 00:07:50.851 "copy": true, 00:07:50.851 "nvme_iov_md": false 00:07:50.851 }, 00:07:50.851 "memory_domains": [ 00:07:50.851 { 00:07:50.851 "dma_device_id": "system", 00:07:50.851 "dma_device_type": 1 00:07:50.851 }, 00:07:50.851 { 00:07:50.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.851 "dma_device_type": 2 00:07:50.851 } 00:07:50.851 ], 00:07:50.851 "driver_specific": {} 00:07:50.851 } 00:07:50.851 ] 00:07:50.852 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.852 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:50.852 11:46:17 bdev_raid.raid_state_function_test_sb -- 
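The `waitforbdev BaseBdev2` call above sets a 2000 ms default timeout and issues `bdev_get_bdevs -b BaseBdev2 -t 2000`; with `-t`, the RPC itself waits server-side for the bdev to appear. A client-side polling loop is an illustrative equivalent only (the `wait_for_bdev` function and stub registry below are our own, not SPDK code):

```python
import time

def wait_for_bdev(get_bdev, name, timeout_ms=2000, poll_ms=50):
    # Polls a lookup function until the bdev appears or the timeout expires,
    # roughly what the shell helper achieves with `bdev_get_bdevs -b NAME -t TIMEOUT`.
    deadline = time.monotonic() + timeout_ms / 1000
    while time.monotonic() < deadline:
        bdev = get_bdev(name)
        if bdev is not None:
            return bdev
        time.sleep(poll_ms / 1000)
    raise TimeoutError(f"bdev {name} did not appear within {timeout_ms} ms")

# Stub registry standing in for the RPC call; values taken from the log's JSON
registry = {"BaseBdev2": {"name": "BaseBdev2", "block_size": 512, "num_blocks": 65536}}
bdev = wait_for_bdev(registry.get, "BaseBdev2")
print(bdev["num_blocks"])
```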
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:50.852 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:50.852 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:50.852 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.852 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:50.852 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.852 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.852 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.852 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.852 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.852 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.852 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.852 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.852 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.852 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.852 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.852 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.852 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:07:50.852 "name": "Existed_Raid", 00:07:50.852 "uuid": "c231c04e-bf8c-4d7d-abb5-cadf608fbf0c", 00:07:50.852 "strip_size_kb": 0, 00:07:50.852 "state": "online", 00:07:50.852 "raid_level": "raid1", 00:07:50.852 "superblock": true, 00:07:50.852 "num_base_bdevs": 2, 00:07:50.852 "num_base_bdevs_discovered": 2, 00:07:50.852 "num_base_bdevs_operational": 2, 00:07:50.852 "base_bdevs_list": [ 00:07:50.852 { 00:07:50.852 "name": "BaseBdev1", 00:07:50.852 "uuid": "514b7e95-4349-45e6-8fcc-3db52597f7fd", 00:07:50.852 "is_configured": true, 00:07:50.852 "data_offset": 2048, 00:07:50.852 "data_size": 63488 00:07:50.852 }, 00:07:50.852 { 00:07:50.852 "name": "BaseBdev2", 00:07:50.852 "uuid": "834190ef-4658-4963-9923-f3ce7ae210e3", 00:07:50.852 "is_configured": true, 00:07:50.852 "data_offset": 2048, 00:07:50.852 "data_size": 63488 00:07:50.852 } 00:07:50.852 ] 00:07:50.852 }' 00:07:50.852 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.852 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.112 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:51.112 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:51.372 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:51.372 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:51.372 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:51.372 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:51.372 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:51.372 11:46:17 
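Throughout the trace, `rpc_cmd bdev_raid_get_bdevs all` returns a JSON array and `jq -r '.[] | select(.name == "Existed_Raid")'` picks out the one raid bdev under test. A Python equivalent of that selection (the second array entry below is a hypothetical filler to show the filter discarding non-matches; the log only contains `Existed_Raid`):

```python
import json

# Shape of `bdev_raid_get_bdevs all` output: a JSON array of raid bdevs
bdevs = json.loads("""[
  {"name": "Existed_Raid", "state": "online", "num_base_bdevs_discovered": 2},
  {"name": "Other_Raid", "state": "configuring", "num_base_bdevs_discovered": 0}
]""")

# Equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
selected = [b for b in bdevs if b["name"] == "Existed_Raid"]
assert len(selected) == 1
print(selected[0]["state"])
```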
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:51.372 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.372 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.372 [2024-11-27 11:46:17.507665] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.372 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.372 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:51.372 "name": "Existed_Raid", 00:07:51.372 "aliases": [ 00:07:51.372 "c231c04e-bf8c-4d7d-abb5-cadf608fbf0c" 00:07:51.372 ], 00:07:51.372 "product_name": "Raid Volume", 00:07:51.372 "block_size": 512, 00:07:51.372 "num_blocks": 63488, 00:07:51.372 "uuid": "c231c04e-bf8c-4d7d-abb5-cadf608fbf0c", 00:07:51.372 "assigned_rate_limits": { 00:07:51.372 "rw_ios_per_sec": 0, 00:07:51.372 "rw_mbytes_per_sec": 0, 00:07:51.372 "r_mbytes_per_sec": 0, 00:07:51.372 "w_mbytes_per_sec": 0 00:07:51.372 }, 00:07:51.372 "claimed": false, 00:07:51.372 "zoned": false, 00:07:51.372 "supported_io_types": { 00:07:51.372 "read": true, 00:07:51.372 "write": true, 00:07:51.372 "unmap": false, 00:07:51.372 "flush": false, 00:07:51.372 "reset": true, 00:07:51.372 "nvme_admin": false, 00:07:51.372 "nvme_io": false, 00:07:51.372 "nvme_io_md": false, 00:07:51.372 "write_zeroes": true, 00:07:51.372 "zcopy": false, 00:07:51.372 "get_zone_info": false, 00:07:51.372 "zone_management": false, 00:07:51.372 "zone_append": false, 00:07:51.372 "compare": false, 00:07:51.372 "compare_and_write": false, 00:07:51.372 "abort": false, 00:07:51.372 "seek_hole": false, 00:07:51.372 "seek_data": false, 00:07:51.372 "copy": false, 00:07:51.372 "nvme_iov_md": false 00:07:51.372 }, 00:07:51.372 "memory_domains": [ 00:07:51.372 { 00:07:51.372 "dma_device_id": "system", 00:07:51.372 
"dma_device_type": 1 00:07:51.372 }, 00:07:51.372 { 00:07:51.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.372 "dma_device_type": 2 00:07:51.372 }, 00:07:51.372 { 00:07:51.372 "dma_device_id": "system", 00:07:51.372 "dma_device_type": 1 00:07:51.372 }, 00:07:51.372 { 00:07:51.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.372 "dma_device_type": 2 00:07:51.372 } 00:07:51.372 ], 00:07:51.372 "driver_specific": { 00:07:51.372 "raid": { 00:07:51.372 "uuid": "c231c04e-bf8c-4d7d-abb5-cadf608fbf0c", 00:07:51.372 "strip_size_kb": 0, 00:07:51.372 "state": "online", 00:07:51.372 "raid_level": "raid1", 00:07:51.372 "superblock": true, 00:07:51.372 "num_base_bdevs": 2, 00:07:51.372 "num_base_bdevs_discovered": 2, 00:07:51.372 "num_base_bdevs_operational": 2, 00:07:51.372 "base_bdevs_list": [ 00:07:51.372 { 00:07:51.372 "name": "BaseBdev1", 00:07:51.372 "uuid": "514b7e95-4349-45e6-8fcc-3db52597f7fd", 00:07:51.372 "is_configured": true, 00:07:51.372 "data_offset": 2048, 00:07:51.372 "data_size": 63488 00:07:51.372 }, 00:07:51.372 { 00:07:51.372 "name": "BaseBdev2", 00:07:51.372 "uuid": "834190ef-4658-4963-9923-f3ce7ae210e3", 00:07:51.372 "is_configured": true, 00:07:51.372 "data_offset": 2048, 00:07:51.372 "data_size": 63488 00:07:51.372 } 00:07:51.372 ] 00:07:51.372 } 00:07:51.372 } 00:07:51.372 }' 00:07:51.372 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:51.372 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:51.372 BaseBdev2' 00:07:51.372 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.372 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:51.372 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:07:51.372 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:51.372 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.372 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.372 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.372 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.372 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:51.372 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:51.373 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:51.373 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:51.373 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.373 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.373 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.373 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.373 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:51.373 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:51.373 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:51.373 11:46:17 
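The `verify_raid_bdev_properties` step above builds `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` for the raid bdev and for each base bdev, then compares them. In `jq`, `join` renders `null` (including absent fields) as an empty string, which is why the compared value is `'512 '` with trailing spaces and the glob match is `[[ 512 == \5\1\2\ \ \ ]]`. A sketch of that comparison (field values assumed from the log, where only `block_size` is set):

```python
def jq_join(values, sep=" "):
    # jq's join(" ") converts null (None here) elements to empty strings
    return sep.join("" if v is None else str(v) for v in values)

# [block_size, md_size, md_interleave, dif_type] for the raid bdev and a base bdev
raid_fields = [512, None, None, None]
base_fields = [512, None, None, None]

cmp_raid_bdev = jq_join(raid_fields)
cmp_base_bdev = jq_join(base_fields)
assert cmp_raid_bdev == "512   "  # "512" plus three trailing spaces, one per null field
assert cmp_raid_bdev == cmp_base_bdev
print(repr(cmp_raid_bdev))
```

Comparing the joined string rather than each field separately lets the shell helper detect any mismatch in metadata layout between the raid volume and its members with a single test.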
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.373 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.373 [2024-11-27 11:46:17.751041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:51.633 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.633 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:51.633 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:51.633 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:51.633 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:51.633 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:51.633 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:51.633 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.633 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:51.633 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:51.633 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:51.633 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:51.633 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.633 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.633 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:51.633 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.633 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.633 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.633 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.633 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.633 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.633 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.633 "name": "Existed_Raid", 00:07:51.633 "uuid": "c231c04e-bf8c-4d7d-abb5-cadf608fbf0c", 00:07:51.633 "strip_size_kb": 0, 00:07:51.633 "state": "online", 00:07:51.633 "raid_level": "raid1", 00:07:51.633 "superblock": true, 00:07:51.633 "num_base_bdevs": 2, 00:07:51.633 "num_base_bdevs_discovered": 1, 00:07:51.633 "num_base_bdevs_operational": 1, 00:07:51.633 "base_bdevs_list": [ 00:07:51.633 { 00:07:51.633 "name": null, 00:07:51.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.633 "is_configured": false, 00:07:51.633 "data_offset": 0, 00:07:51.633 "data_size": 63488 00:07:51.633 }, 00:07:51.633 { 00:07:51.633 "name": "BaseBdev2", 00:07:51.633 "uuid": "834190ef-4658-4963-9923-f3ce7ae210e3", 00:07:51.633 "is_configured": true, 00:07:51.633 "data_offset": 2048, 00:07:51.633 "data_size": 63488 00:07:51.633 } 00:07:51.633 ] 00:07:51.633 }' 00:07:51.633 11:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.633 11:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.203 [2024-11-27 11:46:18.354982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:52.203 [2024-11-27 11:46:18.355180] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:52.203 [2024-11-27 11:46:18.459983] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.203 [2024-11-27 11:46:18.460038] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:52.203 [2024-11-27 11:46:18.460051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- 
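The transitions traced above follow from raid1's redundancy: deleting BaseBdev1 leaves the raid `online` with one operational member (the `has_redundancy raid1` branch sets `expected_state=online`), while deleting BaseBdev2 afterwards drops the member count to zero and the raid deconfigures to `offline`. A toy model of that rule, not SPDK's actual state machine in `bdev_raid.c` (the set of redundant levels below is an assumption based on this script's `has_redundancy` cases):

```python
def state_after_removal(raid_level, operational_before):
    # Redundant levels survive member loss while any member remains;
    # non-redundant levels (e.g. raid0, concat) would go offline immediately.
    has_redundancy = raid_level in ("raid1", "raid5f")
    remaining = operational_before - 1
    if remaining > 0 and has_redundancy:
        return "online", remaining
    return "offline", remaining

# Two-member raid1: first removal keeps it online, second takes it offline
state, remaining = state_after_removal("raid1", 2)
assert (state, remaining) == ("online", 1)
state, remaining = state_after_removal("raid1", remaining)
assert (state, remaining) == ("offline", 0)
print(state)
```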
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62911 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62911 ']' 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62911 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62911 00:07:52.203 killing process with pid 62911 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62911' 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62911 00:07:52.203 [2024-11-27 11:46:18.545118] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:52.203 11:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62911 00:07:52.203 [2024-11-27 11:46:18.563803] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:53.582 11:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:53.582 00:07:53.582 real 0m5.224s 00:07:53.582 user 0m7.551s 00:07:53.582 sys 0m0.828s 00:07:53.582 ************************************ 00:07:53.582 END TEST raid_state_function_test_sb 00:07:53.582 ************************************ 00:07:53.582 11:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.582 11:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.582 11:46:19 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:53.582 11:46:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:53.582 11:46:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.582 11:46:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:53.582 ************************************ 00:07:53.582 START TEST raid_superblock_test 00:07:53.582 ************************************ 00:07:53.582 11:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:07:53.582 11:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:07:53.582 11:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:53.582 11:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:53.582 11:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:53.582 11:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:53.582 11:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:53.582 11:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:53.582 11:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:53.582 11:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:53.582 11:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:53.582 11:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:53.582 11:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:53.582 11:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:53.582 11:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:53.582 11:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:53.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:53.582 11:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63163 00:07:53.582 11:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:53.582 11:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63163 00:07:53.582 11:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63163 ']' 00:07:53.582 11:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.582 11:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.582 11:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.582 11:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.582 11:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.582 [2024-11-27 11:46:19.889512] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:07:53.582 [2024-11-27 11:46:19.889716] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63163 ] 00:07:53.842 [2024-11-27 11:46:20.046633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.842 [2024-11-27 11:46:20.169587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.101 [2024-11-27 11:46:20.381552] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.101 [2024-11-27 11:46:20.381617] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.674 11:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.674 11:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:54.674 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:54.674 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:54.674 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:54.674 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:54.674 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:54.674 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:54.674 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:54.674 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:54.674 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:54.674 
11:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.674 11:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.674 malloc1 00:07:54.674 11:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.674 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:54.674 11:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.674 11:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.674 [2024-11-27 11:46:20.810675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:54.674 [2024-11-27 11:46:20.810825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.674 [2024-11-27 11:46:20.810892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:54.674 [2024-11-27 11:46:20.810946] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.674 [2024-11-27 11:46:20.813528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.674 [2024-11-27 11:46:20.813612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:54.674 pt1 00:07:54.674 11:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.674 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:54.674 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:54.674 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:54.674 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:54.674 11:46:20 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:54.674 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:54.674 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:54.674 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.675 malloc2 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.675 [2024-11-27 11:46:20.870272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:54.675 [2024-11-27 11:46:20.870382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.675 [2024-11-27 11:46:20.870443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:54.675 [2024-11-27 11:46:20.870471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.675 [2024-11-27 11:46:20.872566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.675 [2024-11-27 11:46:20.872637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:54.675 
pt2 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.675 [2024-11-27 11:46:20.882330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:54.675 [2024-11-27 11:46:20.884270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:54.675 [2024-11-27 11:46:20.884504] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:54.675 [2024-11-27 11:46:20.884559] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:54.675 [2024-11-27 11:46:20.884876] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:54.675 [2024-11-27 11:46:20.885074] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:54.675 [2024-11-27 11:46:20.885119] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:54.675 [2024-11-27 11:46:20.885309] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.675 "name": "raid_bdev1", 00:07:54.675 "uuid": "51f58119-1e65-4f6a-808d-90a387337c50", 00:07:54.675 "strip_size_kb": 0, 00:07:54.675 "state": "online", 00:07:54.675 "raid_level": "raid1", 00:07:54.675 "superblock": true, 00:07:54.675 "num_base_bdevs": 2, 00:07:54.675 "num_base_bdevs_discovered": 2, 00:07:54.675 "num_base_bdevs_operational": 2, 00:07:54.675 "base_bdevs_list": [ 00:07:54.675 { 00:07:54.675 "name": "pt1", 00:07:54.675 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:54.675 "is_configured": true, 00:07:54.675 "data_offset": 2048, 00:07:54.675 "data_size": 63488 00:07:54.675 }, 00:07:54.675 { 00:07:54.675 "name": "pt2", 00:07:54.675 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:54.675 "is_configured": true, 00:07:54.675 "data_offset": 2048, 00:07:54.675 "data_size": 63488 00:07:54.675 } 00:07:54.675 ] 00:07:54.675 }' 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.675 11:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.934 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:54.934 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:54.934 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:54.934 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:54.934 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:54.934 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:54.934 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:54.934 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:54.934 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.934 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.934 [2024-11-27 11:46:21.289913] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:54.934 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.194 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:07:55.194 "name": "raid_bdev1", 00:07:55.194 "aliases": [ 00:07:55.194 "51f58119-1e65-4f6a-808d-90a387337c50" 00:07:55.194 ], 00:07:55.194 "product_name": "Raid Volume", 00:07:55.194 "block_size": 512, 00:07:55.194 "num_blocks": 63488, 00:07:55.194 "uuid": "51f58119-1e65-4f6a-808d-90a387337c50", 00:07:55.194 "assigned_rate_limits": { 00:07:55.194 "rw_ios_per_sec": 0, 00:07:55.194 "rw_mbytes_per_sec": 0, 00:07:55.194 "r_mbytes_per_sec": 0, 00:07:55.194 "w_mbytes_per_sec": 0 00:07:55.194 }, 00:07:55.194 "claimed": false, 00:07:55.194 "zoned": false, 00:07:55.194 "supported_io_types": { 00:07:55.194 "read": true, 00:07:55.194 "write": true, 00:07:55.194 "unmap": false, 00:07:55.194 "flush": false, 00:07:55.194 "reset": true, 00:07:55.194 "nvme_admin": false, 00:07:55.194 "nvme_io": false, 00:07:55.194 "nvme_io_md": false, 00:07:55.194 "write_zeroes": true, 00:07:55.194 "zcopy": false, 00:07:55.194 "get_zone_info": false, 00:07:55.194 "zone_management": false, 00:07:55.194 "zone_append": false, 00:07:55.194 "compare": false, 00:07:55.194 "compare_and_write": false, 00:07:55.194 "abort": false, 00:07:55.194 "seek_hole": false, 00:07:55.194 "seek_data": false, 00:07:55.194 "copy": false, 00:07:55.194 "nvme_iov_md": false 00:07:55.194 }, 00:07:55.194 "memory_domains": [ 00:07:55.194 { 00:07:55.194 "dma_device_id": "system", 00:07:55.194 "dma_device_type": 1 00:07:55.194 }, 00:07:55.194 { 00:07:55.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.194 "dma_device_type": 2 00:07:55.194 }, 00:07:55.194 { 00:07:55.194 "dma_device_id": "system", 00:07:55.194 "dma_device_type": 1 00:07:55.194 }, 00:07:55.194 { 00:07:55.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.194 "dma_device_type": 2 00:07:55.194 } 00:07:55.194 ], 00:07:55.194 "driver_specific": { 00:07:55.194 "raid": { 00:07:55.194 "uuid": "51f58119-1e65-4f6a-808d-90a387337c50", 00:07:55.194 "strip_size_kb": 0, 00:07:55.194 "state": "online", 00:07:55.194 "raid_level": "raid1", 
00:07:55.194 "superblock": true, 00:07:55.194 "num_base_bdevs": 2, 00:07:55.194 "num_base_bdevs_discovered": 2, 00:07:55.194 "num_base_bdevs_operational": 2, 00:07:55.194 "base_bdevs_list": [ 00:07:55.194 { 00:07:55.194 "name": "pt1", 00:07:55.194 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.194 "is_configured": true, 00:07:55.194 "data_offset": 2048, 00:07:55.194 "data_size": 63488 00:07:55.194 }, 00:07:55.194 { 00:07:55.194 "name": "pt2", 00:07:55.194 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.194 "is_configured": true, 00:07:55.194 "data_offset": 2048, 00:07:55.194 "data_size": 63488 00:07:55.194 } 00:07:55.194 ] 00:07:55.194 } 00:07:55.194 } 00:07:55.194 }' 00:07:55.194 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:55.194 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:55.194 pt2' 00:07:55.194 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.194 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:55.194 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:55.194 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:55.195 [2024-11-27 11:46:21.513524] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=51f58119-1e65-4f6a-808d-90a387337c50 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 51f58119-1e65-4f6a-808d-90a387337c50 ']' 00:07:55.195 11:46:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.195 [2024-11-27 11:46:21.553112] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:55.195 [2024-11-27 11:46:21.553182] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:55.195 [2024-11-27 11:46:21.553299] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:55.195 [2024-11-27 11:46:21.553389] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:55.195 [2024-11-27 11:46:21.553437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.195 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:55.455 11:46:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.455 [2024-11-27 11:46:21.692953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:55.455 [2024-11-27 11:46:21.694821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:55.455 [2024-11-27 11:46:21.694952] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:55.455 [2024-11-27 11:46:21.695049] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:55.455 [2024-11-27 11:46:21.695124] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:55.455 [2024-11-27 11:46:21.695162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:55.455 request: 00:07:55.455 { 00:07:55.455 "name": "raid_bdev1", 00:07:55.455 "raid_level": "raid1", 00:07:55.455 "base_bdevs": [ 00:07:55.455 "malloc1", 00:07:55.455 "malloc2" 00:07:55.455 ], 00:07:55.455 "superblock": false, 00:07:55.455 "method": "bdev_raid_create", 00:07:55.455 "req_id": 1 00:07:55.455 } 00:07:55.455 Got 
JSON-RPC error response 00:07:55.455 response: 00:07:55.455 { 00:07:55.455 "code": -17, 00:07:55.455 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:55.455 } 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:55.455 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.456 [2024-11-27 11:46:21.748820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:55.456 [2024-11-27 11:46:21.748947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:07:55.456 [2024-11-27 11:46:21.748986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:55.456 [2024-11-27 11:46:21.749018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.456 [2024-11-27 11:46:21.751242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.456 [2024-11-27 11:46:21.751316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:55.456 [2024-11-27 11:46:21.751429] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:55.456 [2024-11-27 11:46:21.751507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:55.456 pt1 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.456 
11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.456 "name": "raid_bdev1", 00:07:55.456 "uuid": "51f58119-1e65-4f6a-808d-90a387337c50", 00:07:55.456 "strip_size_kb": 0, 00:07:55.456 "state": "configuring", 00:07:55.456 "raid_level": "raid1", 00:07:55.456 "superblock": true, 00:07:55.456 "num_base_bdevs": 2, 00:07:55.456 "num_base_bdevs_discovered": 1, 00:07:55.456 "num_base_bdevs_operational": 2, 00:07:55.456 "base_bdevs_list": [ 00:07:55.456 { 00:07:55.456 "name": "pt1", 00:07:55.456 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.456 "is_configured": true, 00:07:55.456 "data_offset": 2048, 00:07:55.456 "data_size": 63488 00:07:55.456 }, 00:07:55.456 { 00:07:55.456 "name": null, 00:07:55.456 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.456 "is_configured": false, 00:07:55.456 "data_offset": 2048, 00:07:55.456 "data_size": 63488 00:07:55.456 } 00:07:55.456 ] 00:07:55.456 }' 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.456 11:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.024 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:56.024 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:56.024 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:07:56.024 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:56.024 11:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.024 11:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.024 [2024-11-27 11:46:22.220015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:56.024 [2024-11-27 11:46:22.220140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.024 [2024-11-27 11:46:22.220189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:56.024 [2024-11-27 11:46:22.220243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.024 [2024-11-27 11:46:22.220743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.024 [2024-11-27 11:46:22.220809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:56.024 [2024-11-27 11:46:22.220932] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:56.024 [2024-11-27 11:46:22.221002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:56.024 [2024-11-27 11:46:22.221153] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:56.024 [2024-11-27 11:46:22.221190] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:56.024 [2024-11-27 11:46:22.221453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:56.024 [2024-11-27 11:46:22.221640] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:56.024 [2024-11-27 11:46:22.221676] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:07:56.024 [2024-11-27 11:46:22.221867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.024 pt2 00:07:56.024 11:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.024 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:56.024 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:56.024 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:56.024 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.024 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.024 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.024 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.024 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.024 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.024 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.024 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.024 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.024 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.024 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.024 11:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.024 11:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:56.024 11:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.024 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.024 "name": "raid_bdev1", 00:07:56.024 "uuid": "51f58119-1e65-4f6a-808d-90a387337c50", 00:07:56.024 "strip_size_kb": 0, 00:07:56.024 "state": "online", 00:07:56.024 "raid_level": "raid1", 00:07:56.024 "superblock": true, 00:07:56.024 "num_base_bdevs": 2, 00:07:56.024 "num_base_bdevs_discovered": 2, 00:07:56.024 "num_base_bdevs_operational": 2, 00:07:56.024 "base_bdevs_list": [ 00:07:56.024 { 00:07:56.024 "name": "pt1", 00:07:56.024 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:56.024 "is_configured": true, 00:07:56.024 "data_offset": 2048, 00:07:56.024 "data_size": 63488 00:07:56.024 }, 00:07:56.024 { 00:07:56.024 "name": "pt2", 00:07:56.024 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.024 "is_configured": true, 00:07:56.024 "data_offset": 2048, 00:07:56.024 "data_size": 63488 00:07:56.024 } 00:07:56.024 ] 00:07:56.024 }' 00:07:56.024 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.024 11:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.283 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:56.283 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:56.283 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:56.283 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:56.283 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:56.283 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:56.283 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:56.283 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:56.283 11:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.283 11:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.283 [2024-11-27 11:46:22.647549] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.544 11:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.544 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:56.544 "name": "raid_bdev1", 00:07:56.544 "aliases": [ 00:07:56.544 "51f58119-1e65-4f6a-808d-90a387337c50" 00:07:56.544 ], 00:07:56.544 "product_name": "Raid Volume", 00:07:56.544 "block_size": 512, 00:07:56.544 "num_blocks": 63488, 00:07:56.544 "uuid": "51f58119-1e65-4f6a-808d-90a387337c50", 00:07:56.544 "assigned_rate_limits": { 00:07:56.544 "rw_ios_per_sec": 0, 00:07:56.544 "rw_mbytes_per_sec": 0, 00:07:56.544 "r_mbytes_per_sec": 0, 00:07:56.544 "w_mbytes_per_sec": 0 00:07:56.544 }, 00:07:56.544 "claimed": false, 00:07:56.544 "zoned": false, 00:07:56.544 "supported_io_types": { 00:07:56.544 "read": true, 00:07:56.544 "write": true, 00:07:56.544 "unmap": false, 00:07:56.544 "flush": false, 00:07:56.544 "reset": true, 00:07:56.544 "nvme_admin": false, 00:07:56.544 "nvme_io": false, 00:07:56.544 "nvme_io_md": false, 00:07:56.544 "write_zeroes": true, 00:07:56.544 "zcopy": false, 00:07:56.544 "get_zone_info": false, 00:07:56.544 "zone_management": false, 00:07:56.544 "zone_append": false, 00:07:56.544 "compare": false, 00:07:56.544 "compare_and_write": false, 00:07:56.544 "abort": false, 00:07:56.544 "seek_hole": false, 00:07:56.544 "seek_data": false, 00:07:56.544 "copy": false, 00:07:56.544 "nvme_iov_md": false 00:07:56.544 }, 00:07:56.544 "memory_domains": [ 00:07:56.544 { 00:07:56.544 "dma_device_id": 
"system", 00:07:56.544 "dma_device_type": 1 00:07:56.544 }, 00:07:56.544 { 00:07:56.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.544 "dma_device_type": 2 00:07:56.544 }, 00:07:56.544 { 00:07:56.544 "dma_device_id": "system", 00:07:56.544 "dma_device_type": 1 00:07:56.544 }, 00:07:56.544 { 00:07:56.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.544 "dma_device_type": 2 00:07:56.544 } 00:07:56.544 ], 00:07:56.544 "driver_specific": { 00:07:56.544 "raid": { 00:07:56.544 "uuid": "51f58119-1e65-4f6a-808d-90a387337c50", 00:07:56.544 "strip_size_kb": 0, 00:07:56.544 "state": "online", 00:07:56.545 "raid_level": "raid1", 00:07:56.545 "superblock": true, 00:07:56.545 "num_base_bdevs": 2, 00:07:56.545 "num_base_bdevs_discovered": 2, 00:07:56.545 "num_base_bdevs_operational": 2, 00:07:56.545 "base_bdevs_list": [ 00:07:56.545 { 00:07:56.545 "name": "pt1", 00:07:56.545 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:56.545 "is_configured": true, 00:07:56.545 "data_offset": 2048, 00:07:56.545 "data_size": 63488 00:07:56.545 }, 00:07:56.545 { 00:07:56.545 "name": "pt2", 00:07:56.545 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.545 "is_configured": true, 00:07:56.545 "data_offset": 2048, 00:07:56.545 "data_size": 63488 00:07:56.545 } 00:07:56.545 ] 00:07:56.545 } 00:07:56.545 } 00:07:56.545 }' 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:56.545 pt2' 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.545 [2024-11-27 11:46:22.871199] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 51f58119-1e65-4f6a-808d-90a387337c50 '!=' 51f58119-1e65-4f6a-808d-90a387337c50 ']' 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.545 [2024-11-27 11:46:22.918951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.545 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.805 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.805 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.805 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.805 11:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.805 11:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.805 11:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.805 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.805 "name": "raid_bdev1", 00:07:56.805 "uuid": "51f58119-1e65-4f6a-808d-90a387337c50", 00:07:56.805 "strip_size_kb": 0, 00:07:56.805 "state": "online", 00:07:56.805 "raid_level": "raid1", 00:07:56.805 "superblock": true, 00:07:56.805 "num_base_bdevs": 2, 00:07:56.805 "num_base_bdevs_discovered": 1, 00:07:56.805 "num_base_bdevs_operational": 1, 00:07:56.805 "base_bdevs_list": [ 00:07:56.805 { 00:07:56.805 "name": null, 00:07:56.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.805 "is_configured": false, 00:07:56.805 "data_offset": 0, 00:07:56.805 "data_size": 63488 00:07:56.805 }, 00:07:56.805 { 00:07:56.805 "name": "pt2", 00:07:56.805 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.805 "is_configured": true, 00:07:56.805 "data_offset": 2048, 00:07:56.805 "data_size": 63488 00:07:56.805 } 00:07:56.805 ] 00:07:56.805 }' 
00:07:56.805 11:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.805 11:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.065 [2024-11-27 11:46:23.290224] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:57.065 [2024-11-27 11:46:23.290298] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.065 [2024-11-27 11:46:23.290432] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.065 [2024-11-27 11:46:23.290516] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:57.065 [2024-11-27 11:46:23.290564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.065 [2024-11-27 11:46:23.346111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:57.065 [2024-11-27 11:46:23.346219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.065 [2024-11-27 11:46:23.346253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:57.065 [2024-11-27 11:46:23.346285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.065 
[2024-11-27 11:46:23.348571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.065 [2024-11-27 11:46:23.348653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:57.065 [2024-11-27 11:46:23.348771] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:57.065 [2024-11-27 11:46:23.348863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:57.065 [2024-11-27 11:46:23.349020] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:57.065 [2024-11-27 11:46:23.349037] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:57.065 [2024-11-27 11:46:23.349265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:57.065 [2024-11-27 11:46:23.349410] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:57.065 [2024-11-27 11:46:23.349419] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:07:57.065 [2024-11-27 11:46:23.349553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.065 pt2 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.065 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.066 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.066 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:57.066 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.066 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.066 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.066 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.066 "name": "raid_bdev1", 00:07:57.066 "uuid": "51f58119-1e65-4f6a-808d-90a387337c50", 00:07:57.066 "strip_size_kb": 0, 00:07:57.066 "state": "online", 00:07:57.066 "raid_level": "raid1", 00:07:57.066 "superblock": true, 00:07:57.066 "num_base_bdevs": 2, 00:07:57.066 "num_base_bdevs_discovered": 1, 00:07:57.066 "num_base_bdevs_operational": 1, 00:07:57.066 "base_bdevs_list": [ 00:07:57.066 { 00:07:57.066 "name": null, 00:07:57.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.066 "is_configured": false, 00:07:57.066 "data_offset": 2048, 00:07:57.066 "data_size": 63488 00:07:57.066 }, 00:07:57.066 { 00:07:57.066 "name": "pt2", 00:07:57.066 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:57.066 "is_configured": true, 00:07:57.066 "data_offset": 2048, 00:07:57.066 "data_size": 63488 00:07:57.066 } 00:07:57.066 ] 00:07:57.066 }' 
00:07:57.066 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.066 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.634 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:57.634 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.634 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.634 [2024-11-27 11:46:23.761384] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:57.634 [2024-11-27 11:46:23.761495] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.634 [2024-11-27 11:46:23.761607] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.634 [2024-11-27 11:46:23.761679] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:57.635 [2024-11-27 11:46:23.761712] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.635 [2024-11-27 11:46:23.821312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:57.635 [2024-11-27 11:46:23.821420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.635 [2024-11-27 11:46:23.821457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:07:57.635 [2024-11-27 11:46:23.821484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.635 [2024-11-27 11:46:23.823647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.635 [2024-11-27 11:46:23.823722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:57.635 [2024-11-27 11:46:23.823855] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:57.635 [2024-11-27 11:46:23.823930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:57.635 [2024-11-27 11:46:23.824106] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:57.635 [2024-11-27 11:46:23.824162] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:57.635 [2024-11-27 11:46:23.824201] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:07:57.635 [2024-11-27 11:46:23.824297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:07:57.635 [2024-11-27 11:46:23.824402] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:07:57.635 [2024-11-27 11:46:23.824437] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:57.635 [2024-11-27 11:46:23.824701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:57.635 [2024-11-27 11:46:23.824899] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:07:57.635 [2024-11-27 11:46:23.824945] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:07:57.635 [2024-11-27 11:46:23.825129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.635 pt1 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.635 "name": "raid_bdev1", 00:07:57.635 "uuid": "51f58119-1e65-4f6a-808d-90a387337c50", 00:07:57.635 "strip_size_kb": 0, 00:07:57.635 "state": "online", 00:07:57.635 "raid_level": "raid1", 00:07:57.635 "superblock": true, 00:07:57.635 "num_base_bdevs": 2, 00:07:57.635 "num_base_bdevs_discovered": 1, 00:07:57.635 "num_base_bdevs_operational": 1, 00:07:57.635 "base_bdevs_list": [ 00:07:57.635 { 00:07:57.635 "name": null, 00:07:57.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.635 "is_configured": false, 00:07:57.635 "data_offset": 2048, 00:07:57.635 "data_size": 63488 00:07:57.635 }, 00:07:57.635 { 00:07:57.635 "name": "pt2", 00:07:57.635 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:57.635 "is_configured": true, 00:07:57.635 "data_offset": 2048, 00:07:57.635 "data_size": 63488 00:07:57.635 } 00:07:57.635 ] 00:07:57.635 }' 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.635 11:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.204 11:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:58.204 11:46:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:58.204 11:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.204 11:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.204 11:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.204 11:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:58.204 11:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:58.204 11:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.204 11:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.204 11:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:58.204 [2024-11-27 11:46:24.344662] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.204 11:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.204 11:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 51f58119-1e65-4f6a-808d-90a387337c50 '!=' 51f58119-1e65-4f6a-808d-90a387337c50 ']' 00:07:58.204 11:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63163 00:07:58.204 11:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63163 ']' 00:07:58.204 11:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63163 00:07:58.204 11:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:58.204 11:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.204 11:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63163 00:07:58.204 killing 
process with pid 63163 00:07:58.204 11:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:58.204 11:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:58.204 11:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63163' 00:07:58.204 11:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63163 00:07:58.204 [2024-11-27 11:46:24.419438] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:58.204 [2024-11-27 11:46:24.419575] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:58.204 [2024-11-27 11:46:24.419634] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:58.205 11:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63163 00:07:58.205 [2024-11-27 11:46:24.419650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:07:58.495 [2024-11-27 11:46:24.631646] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:59.434 11:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:59.434 ************************************ 00:07:59.434 END TEST raid_superblock_test 00:07:59.435 ************************************ 00:07:59.435 00:07:59.435 real 0m5.984s 00:07:59.435 user 0m9.020s 00:07:59.435 sys 0m1.042s 00:07:59.435 11:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.435 11:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.693 11:46:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:07:59.693 11:46:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:59.693 11:46:25 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.693 11:46:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:59.693 ************************************ 00:07:59.693 START TEST raid_read_error_test 00:07:59.693 ************************************ 00:07:59.693 11:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:07:59.693 11:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:59.693 11:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:59.693 11:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:59.693 11:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:59.693 11:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:59.693 11:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:59.693 11:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:59.693 11:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:59.694 11:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:59.694 11:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:59.694 11:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:59.694 11:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:59.694 11:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:59.694 11:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:59.694 11:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:59.694 11:46:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:59.694 11:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:59.694 11:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:59.694 11:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:59.694 11:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:59.694 11:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:59.694 11:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.FrYnuOenEE 00:07:59.694 11:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63488 00:07:59.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.694 11:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63488 00:07:59.694 11:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63488 ']' 00:07:59.694 11:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.694 11:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.694 11:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:59.694 11:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.694 11:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.694 11:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:59.694 [2024-11-27 11:46:25.963496] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:07:59.694 [2024-11-27 11:46:25.963737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63488 ] 00:07:59.954 [2024-11-27 11:46:26.121254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.954 [2024-11-27 11:46:26.243204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.213 [2024-11-27 11:46:26.451634] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.213 [2024-11-27 11:46:26.451686] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.473 11:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.473 11:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:00.473 11:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:00.473 11:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:00.473 11:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.473 11:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.733 BaseBdev1_malloc 00:08:00.734 11:46:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.734 true 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.734 [2024-11-27 11:46:26.870763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:00.734 [2024-11-27 11:46:26.870904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.734 [2024-11-27 11:46:26.870972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:00.734 [2024-11-27 11:46:26.871030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.734 [2024-11-27 11:46:26.873329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.734 [2024-11-27 11:46:26.873412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:00.734 BaseBdev1 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.734 BaseBdev2_malloc 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.734 true 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.734 [2024-11-27 11:46:26.926965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:00.734 [2024-11-27 11:46:26.927062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.734 [2024-11-27 11:46:26.927095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:00.734 [2024-11-27 11:46:26.927108] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.734 [2024-11-27 11:46:26.929207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.734 [2024-11-27 11:46:26.929249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:00.734 BaseBdev2 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.734 [2024-11-27 11:46:26.935009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:00.734 [2024-11-27 11:46:26.936891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:00.734 [2024-11-27 11:46:26.937115] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:00.734 [2024-11-27 11:46:26.937152] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:00.734 [2024-11-27 11:46:26.937410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:00.734 [2024-11-27 11:46:26.937614] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:00.734 [2024-11-27 11:46:26.937629] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:00.734 [2024-11-27 11:46:26.937791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.734 "name": "raid_bdev1", 00:08:00.734 "uuid": "94c74e00-c5cc-4922-9383-07bede3c47aa", 00:08:00.734 "strip_size_kb": 0, 00:08:00.734 "state": "online", 00:08:00.734 "raid_level": "raid1", 00:08:00.734 "superblock": true, 00:08:00.734 "num_base_bdevs": 2, 00:08:00.734 "num_base_bdevs_discovered": 2, 00:08:00.734 "num_base_bdevs_operational": 2, 00:08:00.734 "base_bdevs_list": [ 00:08:00.734 { 00:08:00.734 "name": "BaseBdev1", 00:08:00.734 "uuid": "b67a8334-dad2-5186-b9d1-e7063d4be795", 00:08:00.734 "is_configured": true, 00:08:00.734 "data_offset": 2048, 00:08:00.734 "data_size": 63488 00:08:00.734 }, 00:08:00.734 { 00:08:00.734 "name": "BaseBdev2", 00:08:00.734 "uuid": 
"1abf2bb6-d3ed-5154-9c81-7cea4ec1c58f", 00:08:00.734 "is_configured": true, 00:08:00.734 "data_offset": 2048, 00:08:00.734 "data_size": 63488 00:08:00.734 } 00:08:00.734 ] 00:08:00.734 }' 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.734 11:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.303 11:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:01.303 11:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:01.303 [2024-11-27 11:46:27.523438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:02.243 11:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:02.243 11:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.243 11:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.243 11:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.243 11:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:02.243 11:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:02.243 11:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:02.243 11:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:02.243 11:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:02.243 11:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.243 11:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:08:02.243 11:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:02.243 11:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:02.243 11:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.243 11:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.243 11:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.243 11:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.243 11:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.243 11:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.243 11:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.243 11:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.243 11:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.243 11:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.243 11:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.243 "name": "raid_bdev1", 00:08:02.243 "uuid": "94c74e00-c5cc-4922-9383-07bede3c47aa", 00:08:02.243 "strip_size_kb": 0, 00:08:02.243 "state": "online", 00:08:02.243 "raid_level": "raid1", 00:08:02.243 "superblock": true, 00:08:02.243 "num_base_bdevs": 2, 00:08:02.243 "num_base_bdevs_discovered": 2, 00:08:02.243 "num_base_bdevs_operational": 2, 00:08:02.243 "base_bdevs_list": [ 00:08:02.243 { 00:08:02.243 "name": "BaseBdev1", 00:08:02.243 "uuid": "b67a8334-dad2-5186-b9d1-e7063d4be795", 00:08:02.243 "is_configured": true, 00:08:02.243 "data_offset": 2048, 00:08:02.243 
"data_size": 63488 00:08:02.243 }, 00:08:02.243 { 00:08:02.243 "name": "BaseBdev2", 00:08:02.243 "uuid": "1abf2bb6-d3ed-5154-9c81-7cea4ec1c58f", 00:08:02.243 "is_configured": true, 00:08:02.243 "data_offset": 2048, 00:08:02.243 "data_size": 63488 00:08:02.243 } 00:08:02.243 ] 00:08:02.243 }' 00:08:02.243 11:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.243 11:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.823 11:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:02.823 11:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.823 11:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.823 [2024-11-27 11:46:28.915885] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:02.823 [2024-11-27 11:46:28.916011] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:02.823 [2024-11-27 11:46:28.919228] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.823 [2024-11-27 11:46:28.919337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.823 [2024-11-27 11:46:28.919447] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:02.823 [2024-11-27 11:46:28.919517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:02.823 { 00:08:02.823 "results": [ 00:08:02.823 { 00:08:02.823 "job": "raid_bdev1", 00:08:02.823 "core_mask": "0x1", 00:08:02.823 "workload": "randrw", 00:08:02.823 "percentage": 50, 00:08:02.823 "status": "finished", 00:08:02.823 "queue_depth": 1, 00:08:02.823 "io_size": 131072, 00:08:02.823 "runtime": 1.393469, 00:08:02.823 "iops": 16697.895683362887, 00:08:02.823 "mibps": 2087.236960420361, 
00:08:02.823 "io_failed": 0, 00:08:02.823 "io_timeout": 0, 00:08:02.823 "avg_latency_us": 57.072315521513886, 00:08:02.823 "min_latency_us": 23.58777292576419, 00:08:02.823 "max_latency_us": 1473.844541484716 00:08:02.823 } 00:08:02.823 ], 00:08:02.823 "core_count": 1 00:08:02.823 } 00:08:02.823 11:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.823 11:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63488 00:08:02.824 11:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63488 ']' 00:08:02.824 11:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63488 00:08:02.824 11:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:02.824 11:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.824 11:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63488 00:08:02.824 killing process with pid 63488 00:08:02.824 11:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.824 11:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.824 11:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63488' 00:08:02.824 11:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63488 00:08:02.824 [2024-11-27 11:46:28.964084] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:02.824 11:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63488 00:08:02.824 [2024-11-27 11:46:29.114975] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:04.203 11:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:04.203 11:46:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.FrYnuOenEE 00:08:04.203 11:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:04.203 11:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:04.203 11:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:04.203 11:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:04.203 11:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:04.203 11:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:04.203 00:08:04.203 real 0m4.474s 00:08:04.203 user 0m5.404s 00:08:04.203 sys 0m0.566s 00:08:04.203 ************************************ 00:08:04.203 END TEST raid_read_error_test 00:08:04.203 ************************************ 00:08:04.203 11:46:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.203 11:46:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.203 11:46:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:04.203 11:46:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:04.203 11:46:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.203 11:46:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:04.203 ************************************ 00:08:04.203 START TEST raid_write_error_test 00:08:04.203 ************************************ 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 
00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:04.203 11:46:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.DWB244YXjM 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63628 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63628 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63628 ']' 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.203 11:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.203 [2024-11-27 11:46:30.510436] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:08:04.203 [2024-11-27 11:46:30.510626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63628 ] 00:08:04.463 [2024-11-27 11:46:30.668366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.463 [2024-11-27 11:46:30.787272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.723 [2024-11-27 11:46:30.995148] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.723 [2024-11-27 11:46:30.995304] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.983 11:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.983 11:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:04.983 11:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:04.984 11:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:04.984 11:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.984 11:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.244 BaseBdev1_malloc 00:08:05.244 11:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.244 11:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:05.244 11:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.244 11:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.244 true 00:08:05.244 11:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:05.244 11:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:05.244 11:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.244 11:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.244 [2024-11-27 11:46:31.417442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:05.244 [2024-11-27 11:46:31.417544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.244 [2024-11-27 11:46:31.417584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:05.244 [2024-11-27 11:46:31.417620] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.244 [2024-11-27 11:46:31.419811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.244 [2024-11-27 11:46:31.419900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:05.244 BaseBdev1 00:08:05.244 11:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.244 11:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:05.244 11:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.245 BaseBdev2_malloc 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:05.245 11:46:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.245 true 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.245 [2024-11-27 11:46:31.486121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:05.245 [2024-11-27 11:46:31.486255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.245 [2024-11-27 11:46:31.486292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:05.245 [2024-11-27 11:46:31.486321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.245 [2024-11-27 11:46:31.488564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.245 [2024-11-27 11:46:31.488647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:05.245 BaseBdev2 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.245 [2024-11-27 11:46:31.498155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:05.245 [2024-11-27 11:46:31.500067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:05.245 [2024-11-27 11:46:31.500290] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:05.245 [2024-11-27 11:46:31.500307] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:05.245 [2024-11-27 11:46:31.500587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:05.245 [2024-11-27 11:46:31.500773] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:05.245 [2024-11-27 11:46:31.500785] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:05.245 [2024-11-27 11:46:31.501034] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.245 "name": "raid_bdev1", 00:08:05.245 "uuid": "4fa5c078-e24c-4f66-9047-61281847af51", 00:08:05.245 "strip_size_kb": 0, 00:08:05.245 "state": "online", 00:08:05.245 "raid_level": "raid1", 00:08:05.245 "superblock": true, 00:08:05.245 "num_base_bdevs": 2, 00:08:05.245 "num_base_bdevs_discovered": 2, 00:08:05.245 "num_base_bdevs_operational": 2, 00:08:05.245 "base_bdevs_list": [ 00:08:05.245 { 00:08:05.245 "name": "BaseBdev1", 00:08:05.245 "uuid": "e4a62255-4936-5fad-aba7-d4badec3b402", 00:08:05.245 "is_configured": true, 00:08:05.245 "data_offset": 2048, 00:08:05.245 "data_size": 63488 00:08:05.245 }, 00:08:05.245 { 00:08:05.245 "name": "BaseBdev2", 00:08:05.245 "uuid": "d44a7e01-c3fe-5a21-9de2-bc923038fc61", 00:08:05.245 "is_configured": true, 00:08:05.245 "data_offset": 2048, 00:08:05.245 "data_size": 63488 00:08:05.245 } 00:08:05.245 ] 00:08:05.245 }' 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.245 11:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.815 11:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:05.815 11:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:05.815 [2024-11-27 11:46:32.058517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:06.811 11:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:06.812 11:46:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.812 11:46:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.812 [2024-11-27 11:46:32.934363] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:06.812 [2024-11-27 11:46:32.934491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:06.812 [2024-11-27 11:46:32.934716] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:06.812 11:46:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.812 11:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:06.812 11:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:06.812 11:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:06.812 11:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:06.812 11:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:06.812 11:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:06.812 11:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:06.812 11:46:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:06.812 11:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:06.812 11:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:06.812 11:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.812 11:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.812 11:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.812 11:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.812 11:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.812 11:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:06.812 11:46:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.812 11:46:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.812 11:46:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.812 11:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.812 "name": "raid_bdev1", 00:08:06.812 "uuid": "4fa5c078-e24c-4f66-9047-61281847af51", 00:08:06.812 "strip_size_kb": 0, 00:08:06.812 "state": "online", 00:08:06.812 "raid_level": "raid1", 00:08:06.812 "superblock": true, 00:08:06.812 "num_base_bdevs": 2, 00:08:06.812 "num_base_bdevs_discovered": 1, 00:08:06.812 "num_base_bdevs_operational": 1, 00:08:06.812 "base_bdevs_list": [ 00:08:06.812 { 00:08:06.812 "name": null, 00:08:06.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.812 "is_configured": false, 00:08:06.812 "data_offset": 0, 00:08:06.812 "data_size": 63488 00:08:06.812 }, 
00:08:06.812 { 00:08:06.812 "name": "BaseBdev2", 00:08:06.812 "uuid": "d44a7e01-c3fe-5a21-9de2-bc923038fc61", 00:08:06.812 "is_configured": true, 00:08:06.812 "data_offset": 2048, 00:08:06.812 "data_size": 63488 00:08:06.812 } 00:08:06.812 ] 00:08:06.812 }' 00:08:06.812 11:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.812 11:46:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.072 11:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:07.072 11:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.072 11:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.072 [2024-11-27 11:46:33.379460] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:07.072 [2024-11-27 11:46:33.379584] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:07.072 [2024-11-27 11:46:33.382241] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:07.072 [2024-11-27 11:46:33.382330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.072 [2024-11-27 11:46:33.382408] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:07.072 [2024-11-27 11:46:33.382455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:07.072 { 00:08:07.073 "results": [ 00:08:07.073 { 00:08:07.073 "job": "raid_bdev1", 00:08:07.073 "core_mask": "0x1", 00:08:07.073 "workload": "randrw", 00:08:07.073 "percentage": 50, 00:08:07.073 "status": "finished", 00:08:07.073 "queue_depth": 1, 00:08:07.073 "io_size": 131072, 00:08:07.073 "runtime": 1.321595, 00:08:07.073 "iops": 20109.791577601307, 00:08:07.073 "mibps": 2513.7239472001634, 00:08:07.073 "io_failed": 0, 
00:08:07.073 "io_timeout": 0, 00:08:07.073 "avg_latency_us": 46.96453974962427, 00:08:07.073 "min_latency_us": 22.581659388646287, 00:08:07.073 "max_latency_us": 1416.6078602620087 00:08:07.073 } 00:08:07.073 ], 00:08:07.073 "core_count": 1 00:08:07.073 } 00:08:07.073 11:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.073 11:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63628 00:08:07.073 11:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63628 ']' 00:08:07.073 11:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63628 00:08:07.073 11:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:07.073 11:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.073 11:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63628 00:08:07.073 killing process with pid 63628 00:08:07.073 11:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.073 11:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.073 11:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63628' 00:08:07.073 11:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63628 00:08:07.073 [2024-11-27 11:46:33.426209] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:07.073 11:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63628 00:08:07.332 [2024-11-27 11:46:33.560078] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:08.712 11:46:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.DWB244YXjM 00:08:08.712 11:46:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:08.712 11:46:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:08.712 11:46:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:08.712 11:46:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:08.712 11:46:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:08.712 11:46:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:08.712 ************************************ 00:08:08.712 11:46:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:08.712 00:08:08.712 real 0m4.378s 00:08:08.712 user 0m5.297s 00:08:08.712 sys 0m0.516s 00:08:08.712 11:46:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.712 11:46:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.712 END TEST raid_write_error_test 00:08:08.712 ************************************ 00:08:08.712 11:46:34 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:08.712 11:46:34 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:08.712 11:46:34 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:08.712 11:46:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:08.712 11:46:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.712 11:46:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:08.712 ************************************ 00:08:08.712 START TEST raid_state_function_test 00:08:08.712 ************************************ 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:08.712 11:46:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@211 -- # local strip_size 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63771 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63771' 00:08:08.712 Process raid pid: 63771 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63771 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63771 ']' 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.712 11:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.712 [2024-11-27 11:46:34.949759] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:08:08.712 [2024-11-27 11:46:34.949917] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.971 [2024-11-27 11:46:35.128761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.971 [2024-11-27 11:46:35.251408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.230 [2024-11-27 11:46:35.491521] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.230 [2024-11-27 11:46:35.491667] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.490 11:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.490 11:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:09.490 11:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:09.490 11:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.490 11:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.490 [2024-11-27 11:46:35.820201] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:09.490 [2024-11-27 11:46:35.820262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:09.490 [2024-11-27 11:46:35.820275] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.490 [2024-11-27 11:46:35.820285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:09.490 [2024-11-27 11:46:35.820293] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:09.490 [2024-11-27 11:46:35.820303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:09.490 11:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.490 11:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:09.490 11:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.490 11:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.490 11:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.490 11:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.490 11:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.490 11:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.490 11:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.490 11:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.490 11:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.490 11:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.490 11:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.490 11:46:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.490 11:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.490 11:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.750 11:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.750 "name": "Existed_Raid", 00:08:09.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.750 "strip_size_kb": 64, 00:08:09.750 "state": "configuring", 00:08:09.750 "raid_level": "raid0", 00:08:09.750 "superblock": false, 00:08:09.750 "num_base_bdevs": 3, 00:08:09.750 "num_base_bdevs_discovered": 0, 00:08:09.750 "num_base_bdevs_operational": 3, 00:08:09.750 "base_bdevs_list": [ 00:08:09.750 { 00:08:09.750 "name": "BaseBdev1", 00:08:09.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.750 "is_configured": false, 00:08:09.750 "data_offset": 0, 00:08:09.750 "data_size": 0 00:08:09.750 }, 00:08:09.750 { 00:08:09.750 "name": "BaseBdev2", 00:08:09.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.750 "is_configured": false, 00:08:09.750 "data_offset": 0, 00:08:09.750 "data_size": 0 00:08:09.750 }, 00:08:09.750 { 00:08:09.750 "name": "BaseBdev3", 00:08:09.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.750 "is_configured": false, 00:08:09.750 "data_offset": 0, 00:08:09.750 "data_size": 0 00:08:09.750 } 00:08:09.750 ] 00:08:09.750 }' 00:08:09.750 11:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.750 11:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.010 11:46:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.010 [2024-11-27 11:46:36.283422] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:10.010 [2024-11-27 11:46:36.283515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.010 [2024-11-27 11:46:36.295392] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:10.010 [2024-11-27 11:46:36.295502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:10.010 [2024-11-27 11:46:36.295546] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.010 [2024-11-27 11:46:36.295576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.010 [2024-11-27 11:46:36.295598] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:10.010 [2024-11-27 11:46:36.295628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.010 [2024-11-27 11:46:36.344860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.010 BaseBdev1 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.010 [ 00:08:10.010 { 00:08:10.010 "name": "BaseBdev1", 00:08:10.010 "aliases": [ 00:08:10.010 "ed51c36a-f9f7-41f4-ae9d-19a8d315c219" 00:08:10.010 ], 00:08:10.010 
"product_name": "Malloc disk", 00:08:10.010 "block_size": 512, 00:08:10.010 "num_blocks": 65536, 00:08:10.010 "uuid": "ed51c36a-f9f7-41f4-ae9d-19a8d315c219", 00:08:10.010 "assigned_rate_limits": { 00:08:10.010 "rw_ios_per_sec": 0, 00:08:10.010 "rw_mbytes_per_sec": 0, 00:08:10.010 "r_mbytes_per_sec": 0, 00:08:10.010 "w_mbytes_per_sec": 0 00:08:10.010 }, 00:08:10.010 "claimed": true, 00:08:10.010 "claim_type": "exclusive_write", 00:08:10.010 "zoned": false, 00:08:10.010 "supported_io_types": { 00:08:10.010 "read": true, 00:08:10.010 "write": true, 00:08:10.010 "unmap": true, 00:08:10.010 "flush": true, 00:08:10.010 "reset": true, 00:08:10.010 "nvme_admin": false, 00:08:10.010 "nvme_io": false, 00:08:10.010 "nvme_io_md": false, 00:08:10.010 "write_zeroes": true, 00:08:10.010 "zcopy": true, 00:08:10.010 "get_zone_info": false, 00:08:10.010 "zone_management": false, 00:08:10.010 "zone_append": false, 00:08:10.010 "compare": false, 00:08:10.010 "compare_and_write": false, 00:08:10.010 "abort": true, 00:08:10.010 "seek_hole": false, 00:08:10.010 "seek_data": false, 00:08:10.010 "copy": true, 00:08:10.010 "nvme_iov_md": false 00:08:10.010 }, 00:08:10.010 "memory_domains": [ 00:08:10.010 { 00:08:10.010 "dma_device_id": "system", 00:08:10.010 "dma_device_type": 1 00:08:10.010 }, 00:08:10.010 { 00:08:10.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.010 "dma_device_type": 2 00:08:10.010 } 00:08:10.010 ], 00:08:10.010 "driver_specific": {} 00:08:10.010 } 00:08:10.010 ] 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.010 11:46:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.010 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.270 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.270 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.270 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.270 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.270 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.270 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.270 "name": "Existed_Raid", 00:08:10.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.270 "strip_size_kb": 64, 00:08:10.270 "state": "configuring", 00:08:10.270 "raid_level": "raid0", 00:08:10.270 "superblock": false, 00:08:10.270 "num_base_bdevs": 3, 00:08:10.270 "num_base_bdevs_discovered": 1, 00:08:10.270 "num_base_bdevs_operational": 3, 00:08:10.270 "base_bdevs_list": [ 00:08:10.270 { 00:08:10.270 "name": "BaseBdev1", 
00:08:10.270 "uuid": "ed51c36a-f9f7-41f4-ae9d-19a8d315c219", 00:08:10.270 "is_configured": true, 00:08:10.270 "data_offset": 0, 00:08:10.270 "data_size": 65536 00:08:10.270 }, 00:08:10.270 { 00:08:10.270 "name": "BaseBdev2", 00:08:10.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.270 "is_configured": false, 00:08:10.270 "data_offset": 0, 00:08:10.270 "data_size": 0 00:08:10.270 }, 00:08:10.270 { 00:08:10.270 "name": "BaseBdev3", 00:08:10.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.270 "is_configured": false, 00:08:10.270 "data_offset": 0, 00:08:10.270 "data_size": 0 00:08:10.270 } 00:08:10.270 ] 00:08:10.270 }' 00:08:10.270 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.270 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.530 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:10.530 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.530 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.530 [2024-11-27 11:46:36.852045] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:10.530 [2024-11-27 11:46:36.852171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:10.530 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.530 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:10.530 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.530 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.530 [2024-11-27 
11:46:36.864096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.530 [2024-11-27 11:46:36.866371] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.530 [2024-11-27 11:46:36.866467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.530 [2024-11-27 11:46:36.866539] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:10.530 [2024-11-27 11:46:36.866595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:10.530 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.530 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:10.530 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:10.530 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:10.530 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.530 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.530 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.530 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.530 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.530 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.530 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.530 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:10.530 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.530 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.530 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.530 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.530 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.530 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.789 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.789 "name": "Existed_Raid", 00:08:10.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.789 "strip_size_kb": 64, 00:08:10.789 "state": "configuring", 00:08:10.789 "raid_level": "raid0", 00:08:10.789 "superblock": false, 00:08:10.789 "num_base_bdevs": 3, 00:08:10.789 "num_base_bdevs_discovered": 1, 00:08:10.789 "num_base_bdevs_operational": 3, 00:08:10.789 "base_bdevs_list": [ 00:08:10.789 { 00:08:10.789 "name": "BaseBdev1", 00:08:10.789 "uuid": "ed51c36a-f9f7-41f4-ae9d-19a8d315c219", 00:08:10.789 "is_configured": true, 00:08:10.789 "data_offset": 0, 00:08:10.789 "data_size": 65536 00:08:10.789 }, 00:08:10.789 { 00:08:10.789 "name": "BaseBdev2", 00:08:10.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.789 "is_configured": false, 00:08:10.789 "data_offset": 0, 00:08:10.789 "data_size": 0 00:08:10.789 }, 00:08:10.789 { 00:08:10.789 "name": "BaseBdev3", 00:08:10.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.789 "is_configured": false, 00:08:10.789 "data_offset": 0, 00:08:10.789 "data_size": 0 00:08:10.789 } 00:08:10.789 ] 00:08:10.789 }' 00:08:10.789 11:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:10.789 11:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.049 [2024-11-27 11:46:37.379246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:11.049 BaseBdev2 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:11.049 11:46:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.049 [ 00:08:11.049 { 00:08:11.049 "name": "BaseBdev2", 00:08:11.049 "aliases": [ 00:08:11.049 "8b1ace2e-cd89-423b-8682-69abd01ea318" 00:08:11.049 ], 00:08:11.049 "product_name": "Malloc disk", 00:08:11.049 "block_size": 512, 00:08:11.049 "num_blocks": 65536, 00:08:11.049 "uuid": "8b1ace2e-cd89-423b-8682-69abd01ea318", 00:08:11.049 "assigned_rate_limits": { 00:08:11.049 "rw_ios_per_sec": 0, 00:08:11.049 "rw_mbytes_per_sec": 0, 00:08:11.049 "r_mbytes_per_sec": 0, 00:08:11.049 "w_mbytes_per_sec": 0 00:08:11.049 }, 00:08:11.049 "claimed": true, 00:08:11.049 "claim_type": "exclusive_write", 00:08:11.049 "zoned": false, 00:08:11.049 "supported_io_types": { 00:08:11.049 "read": true, 00:08:11.049 "write": true, 00:08:11.049 "unmap": true, 00:08:11.049 "flush": true, 00:08:11.049 "reset": true, 00:08:11.049 "nvme_admin": false, 00:08:11.049 "nvme_io": false, 00:08:11.049 "nvme_io_md": false, 00:08:11.049 "write_zeroes": true, 00:08:11.049 "zcopy": true, 00:08:11.049 "get_zone_info": false, 00:08:11.049 "zone_management": false, 00:08:11.049 "zone_append": false, 00:08:11.049 "compare": false, 00:08:11.049 "compare_and_write": false, 00:08:11.049 "abort": true, 00:08:11.049 "seek_hole": false, 00:08:11.049 "seek_data": false, 00:08:11.049 "copy": true, 00:08:11.049 "nvme_iov_md": false 00:08:11.049 }, 00:08:11.049 "memory_domains": [ 00:08:11.049 { 00:08:11.049 "dma_device_id": "system", 00:08:11.049 "dma_device_type": 1 00:08:11.049 }, 00:08:11.049 { 00:08:11.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.049 "dma_device_type": 2 00:08:11.049 } 00:08:11.049 ], 00:08:11.049 "driver_specific": {} 00:08:11.049 } 00:08:11.049 ] 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.049 11:46:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.049 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.341 11:46:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.341 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.341 "name": "Existed_Raid", 00:08:11.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.341 "strip_size_kb": 64, 00:08:11.341 "state": "configuring", 00:08:11.341 "raid_level": "raid0", 00:08:11.341 "superblock": false, 00:08:11.341 "num_base_bdevs": 3, 00:08:11.341 "num_base_bdevs_discovered": 2, 00:08:11.341 "num_base_bdevs_operational": 3, 00:08:11.341 "base_bdevs_list": [ 00:08:11.341 { 00:08:11.341 "name": "BaseBdev1", 00:08:11.341 "uuid": "ed51c36a-f9f7-41f4-ae9d-19a8d315c219", 00:08:11.341 "is_configured": true, 00:08:11.341 "data_offset": 0, 00:08:11.341 "data_size": 65536 00:08:11.341 }, 00:08:11.341 { 00:08:11.341 "name": "BaseBdev2", 00:08:11.341 "uuid": "8b1ace2e-cd89-423b-8682-69abd01ea318", 00:08:11.341 "is_configured": true, 00:08:11.341 "data_offset": 0, 00:08:11.341 "data_size": 65536 00:08:11.341 }, 00:08:11.341 { 00:08:11.341 "name": "BaseBdev3", 00:08:11.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.341 "is_configured": false, 00:08:11.341 "data_offset": 0, 00:08:11.341 "data_size": 0 00:08:11.341 } 00:08:11.341 ] 00:08:11.341 }' 00:08:11.341 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.341 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.629 [2024-11-27 11:46:37.941510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:11.629 [2024-11-27 11:46:37.941665] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:11.629 [2024-11-27 11:46:37.941698] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:11.629 [2024-11-27 11:46:37.942073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:11.629 [2024-11-27 11:46:37.942344] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:11.629 BaseBdev3 00:08:11.629 [2024-11-27 11:46:37.942404] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:11.629 [2024-11-27 11:46:37.942718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.629 
11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.629 [ 00:08:11.629 { 00:08:11.629 "name": "BaseBdev3", 00:08:11.629 "aliases": [ 00:08:11.629 "4c91e8e0-1498-4f06-bf3c-ec6e72bd7aae" 00:08:11.629 ], 00:08:11.629 "product_name": "Malloc disk", 00:08:11.629 "block_size": 512, 00:08:11.629 "num_blocks": 65536, 00:08:11.629 "uuid": "4c91e8e0-1498-4f06-bf3c-ec6e72bd7aae", 00:08:11.629 "assigned_rate_limits": { 00:08:11.629 "rw_ios_per_sec": 0, 00:08:11.629 "rw_mbytes_per_sec": 0, 00:08:11.629 "r_mbytes_per_sec": 0, 00:08:11.629 "w_mbytes_per_sec": 0 00:08:11.629 }, 00:08:11.629 "claimed": true, 00:08:11.629 "claim_type": "exclusive_write", 00:08:11.629 "zoned": false, 00:08:11.629 "supported_io_types": { 00:08:11.629 "read": true, 00:08:11.629 "write": true, 00:08:11.629 "unmap": true, 00:08:11.629 "flush": true, 00:08:11.629 "reset": true, 00:08:11.629 "nvme_admin": false, 00:08:11.629 "nvme_io": false, 00:08:11.629 "nvme_io_md": false, 00:08:11.629 "write_zeroes": true, 00:08:11.629 "zcopy": true, 00:08:11.629 "get_zone_info": false, 00:08:11.629 "zone_management": false, 00:08:11.629 "zone_append": false, 00:08:11.629 "compare": false, 00:08:11.629 "compare_and_write": false, 00:08:11.629 "abort": true, 00:08:11.629 "seek_hole": false, 00:08:11.629 "seek_data": false, 00:08:11.629 "copy": true, 00:08:11.629 "nvme_iov_md": false 00:08:11.629 }, 00:08:11.629 "memory_domains": [ 00:08:11.629 { 00:08:11.629 "dma_device_id": "system", 00:08:11.629 "dma_device_type": 1 00:08:11.629 }, 00:08:11.629 { 00:08:11.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.629 "dma_device_type": 2 00:08:11.629 } 00:08:11.629 ], 00:08:11.629 "driver_specific": {} 00:08:11.629 } 00:08:11.629 ] 
00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.629 11:46:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:11.629 11:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.888 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.888 "name": "Existed_Raid", 00:08:11.888 "uuid": "273e510f-1973-4f72-9ada-4472d947a17c", 00:08:11.888 "strip_size_kb": 64, 00:08:11.888 "state": "online", 00:08:11.888 "raid_level": "raid0", 00:08:11.888 "superblock": false, 00:08:11.888 "num_base_bdevs": 3, 00:08:11.888 "num_base_bdevs_discovered": 3, 00:08:11.888 "num_base_bdevs_operational": 3, 00:08:11.888 "base_bdevs_list": [ 00:08:11.888 { 00:08:11.888 "name": "BaseBdev1", 00:08:11.888 "uuid": "ed51c36a-f9f7-41f4-ae9d-19a8d315c219", 00:08:11.888 "is_configured": true, 00:08:11.888 "data_offset": 0, 00:08:11.888 "data_size": 65536 00:08:11.888 }, 00:08:11.888 { 00:08:11.888 "name": "BaseBdev2", 00:08:11.888 "uuid": "8b1ace2e-cd89-423b-8682-69abd01ea318", 00:08:11.888 "is_configured": true, 00:08:11.888 "data_offset": 0, 00:08:11.888 "data_size": 65536 00:08:11.888 }, 00:08:11.888 { 00:08:11.888 "name": "BaseBdev3", 00:08:11.888 "uuid": "4c91e8e0-1498-4f06-bf3c-ec6e72bd7aae", 00:08:11.888 "is_configured": true, 00:08:11.888 "data_offset": 0, 00:08:11.888 "data_size": 65536 00:08:11.888 } 00:08:11.888 ] 00:08:11.888 }' 00:08:11.888 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.888 11:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.148 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:12.149 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:12.149 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:12.149 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:12.149 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:12.149 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:12.149 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:12.149 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:12.149 11:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.149 11:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.149 [2024-11-27 11:46:38.457086] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:12.149 11:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.149 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:12.149 "name": "Existed_Raid", 00:08:12.149 "aliases": [ 00:08:12.149 "273e510f-1973-4f72-9ada-4472d947a17c" 00:08:12.149 ], 00:08:12.149 "product_name": "Raid Volume", 00:08:12.149 "block_size": 512, 00:08:12.149 "num_blocks": 196608, 00:08:12.149 "uuid": "273e510f-1973-4f72-9ada-4472d947a17c", 00:08:12.149 "assigned_rate_limits": { 00:08:12.149 "rw_ios_per_sec": 0, 00:08:12.149 "rw_mbytes_per_sec": 0, 00:08:12.149 "r_mbytes_per_sec": 0, 00:08:12.149 "w_mbytes_per_sec": 0 00:08:12.149 }, 00:08:12.149 "claimed": false, 00:08:12.149 "zoned": false, 00:08:12.149 "supported_io_types": { 00:08:12.149 "read": true, 00:08:12.149 "write": true, 00:08:12.149 "unmap": true, 00:08:12.149 "flush": true, 00:08:12.149 "reset": true, 00:08:12.149 "nvme_admin": false, 00:08:12.149 "nvme_io": false, 00:08:12.149 "nvme_io_md": false, 00:08:12.149 "write_zeroes": true, 00:08:12.149 "zcopy": false, 00:08:12.149 "get_zone_info": false, 00:08:12.149 "zone_management": false, 00:08:12.149 
"zone_append": false, 00:08:12.149 "compare": false, 00:08:12.149 "compare_and_write": false, 00:08:12.149 "abort": false, 00:08:12.149 "seek_hole": false, 00:08:12.149 "seek_data": false, 00:08:12.149 "copy": false, 00:08:12.149 "nvme_iov_md": false 00:08:12.149 }, 00:08:12.149 "memory_domains": [ 00:08:12.149 { 00:08:12.149 "dma_device_id": "system", 00:08:12.149 "dma_device_type": 1 00:08:12.149 }, 00:08:12.149 { 00:08:12.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.149 "dma_device_type": 2 00:08:12.149 }, 00:08:12.149 { 00:08:12.149 "dma_device_id": "system", 00:08:12.149 "dma_device_type": 1 00:08:12.149 }, 00:08:12.149 { 00:08:12.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.149 "dma_device_type": 2 00:08:12.149 }, 00:08:12.149 { 00:08:12.149 "dma_device_id": "system", 00:08:12.149 "dma_device_type": 1 00:08:12.149 }, 00:08:12.149 { 00:08:12.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.149 "dma_device_type": 2 00:08:12.149 } 00:08:12.149 ], 00:08:12.149 "driver_specific": { 00:08:12.149 "raid": { 00:08:12.149 "uuid": "273e510f-1973-4f72-9ada-4472d947a17c", 00:08:12.149 "strip_size_kb": 64, 00:08:12.149 "state": "online", 00:08:12.149 "raid_level": "raid0", 00:08:12.149 "superblock": false, 00:08:12.149 "num_base_bdevs": 3, 00:08:12.149 "num_base_bdevs_discovered": 3, 00:08:12.149 "num_base_bdevs_operational": 3, 00:08:12.149 "base_bdevs_list": [ 00:08:12.149 { 00:08:12.149 "name": "BaseBdev1", 00:08:12.149 "uuid": "ed51c36a-f9f7-41f4-ae9d-19a8d315c219", 00:08:12.149 "is_configured": true, 00:08:12.149 "data_offset": 0, 00:08:12.149 "data_size": 65536 00:08:12.149 }, 00:08:12.149 { 00:08:12.149 "name": "BaseBdev2", 00:08:12.149 "uuid": "8b1ace2e-cd89-423b-8682-69abd01ea318", 00:08:12.149 "is_configured": true, 00:08:12.149 "data_offset": 0, 00:08:12.149 "data_size": 65536 00:08:12.149 }, 00:08:12.149 { 00:08:12.149 "name": "BaseBdev3", 00:08:12.149 "uuid": "4c91e8e0-1498-4f06-bf3c-ec6e72bd7aae", 00:08:12.149 "is_configured": true, 
00:08:12.149 "data_offset": 0, 00:08:12.149 "data_size": 65536 00:08:12.149 } 00:08:12.149 ] 00:08:12.149 } 00:08:12.149 } 00:08:12.149 }' 00:08:12.149 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:12.408 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:12.408 BaseBdev2 00:08:12.408 BaseBdev3' 00:08:12.408 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.408 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:12.408 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.408 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:12.408 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.408 11:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.408 11:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.409 11:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.409 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.409 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.409 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.409 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:12.409 11:46:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.409 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.409 11:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.409 11:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.409 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.409 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.409 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.409 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:12.409 11:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.409 11:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.409 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.409 11:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.409 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.409 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.409 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:12.409 11:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.409 11:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.409 [2024-11-27 11:46:38.764270] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:12.409 [2024-11-27 11:46:38.764366] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:12.409 [2024-11-27 11:46:38.764484] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:12.668 11:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.668 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:12.668 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:12.668 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:12.668 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:12.668 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:12.668 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:12.668 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.668 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:12.668 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.668 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.668 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:12.668 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.668 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.668 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:12.668 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.668 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.668 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.668 11:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.668 11:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.668 11:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.668 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.668 "name": "Existed_Raid", 00:08:12.668 "uuid": "273e510f-1973-4f72-9ada-4472d947a17c", 00:08:12.668 "strip_size_kb": 64, 00:08:12.668 "state": "offline", 00:08:12.668 "raid_level": "raid0", 00:08:12.668 "superblock": false, 00:08:12.668 "num_base_bdevs": 3, 00:08:12.668 "num_base_bdevs_discovered": 2, 00:08:12.669 "num_base_bdevs_operational": 2, 00:08:12.669 "base_bdevs_list": [ 00:08:12.669 { 00:08:12.669 "name": null, 00:08:12.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.669 "is_configured": false, 00:08:12.669 "data_offset": 0, 00:08:12.669 "data_size": 65536 00:08:12.669 }, 00:08:12.669 { 00:08:12.669 "name": "BaseBdev2", 00:08:12.669 "uuid": "8b1ace2e-cd89-423b-8682-69abd01ea318", 00:08:12.669 "is_configured": true, 00:08:12.669 "data_offset": 0, 00:08:12.669 "data_size": 65536 00:08:12.669 }, 00:08:12.669 { 00:08:12.669 "name": "BaseBdev3", 00:08:12.669 "uuid": "4c91e8e0-1498-4f06-bf3c-ec6e72bd7aae", 00:08:12.669 "is_configured": true, 00:08:12.669 "data_offset": 0, 00:08:12.669 "data_size": 65536 00:08:12.669 } 00:08:12.669 ] 00:08:12.669 }' 00:08:12.669 11:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.669 11:46:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.238 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:13.238 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:13.238 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.238 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.238 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.238 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:13.238 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.238 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:13.238 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:13.238 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:13.238 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.238 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.238 [2024-11-27 11:46:39.415227] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:13.238 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.238 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:13.238 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:13.238 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.238 11:46:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:13.238 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.238 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.238 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.238 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:13.238 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:13.238 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:13.238 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.238 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.238 [2024-11-27 11:46:39.572673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:13.239 [2024-11-27 11:46:39.572736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.499 BaseBdev2 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.499 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.499 [ 00:08:13.499 { 00:08:13.499 "name": "BaseBdev2", 00:08:13.499 "aliases": [ 00:08:13.499 "02f2f0d7-cb50-4ccf-948e-0e227863f5c9" 00:08:13.499 ], 00:08:13.499 "product_name": "Malloc disk", 00:08:13.499 "block_size": 512, 00:08:13.499 "num_blocks": 65536, 00:08:13.499 "uuid": "02f2f0d7-cb50-4ccf-948e-0e227863f5c9", 00:08:13.499 "assigned_rate_limits": { 00:08:13.499 "rw_ios_per_sec": 0, 00:08:13.499 "rw_mbytes_per_sec": 0, 00:08:13.499 "r_mbytes_per_sec": 0, 00:08:13.499 "w_mbytes_per_sec": 0 00:08:13.500 }, 00:08:13.500 "claimed": false, 00:08:13.500 "zoned": false, 00:08:13.500 "supported_io_types": { 00:08:13.500 "read": true, 00:08:13.500 "write": true, 00:08:13.500 "unmap": true, 00:08:13.500 "flush": true, 00:08:13.500 "reset": true, 00:08:13.500 "nvme_admin": false, 00:08:13.500 "nvme_io": false, 00:08:13.500 "nvme_io_md": false, 00:08:13.500 "write_zeroes": true, 00:08:13.500 "zcopy": true, 00:08:13.500 "get_zone_info": false, 00:08:13.500 "zone_management": false, 00:08:13.500 "zone_append": false, 00:08:13.500 "compare": false, 00:08:13.500 "compare_and_write": false, 00:08:13.500 "abort": true, 00:08:13.500 "seek_hole": false, 00:08:13.500 "seek_data": false, 00:08:13.500 "copy": true, 00:08:13.500 "nvme_iov_md": false 00:08:13.500 }, 00:08:13.500 "memory_domains": [ 00:08:13.500 { 00:08:13.500 "dma_device_id": "system", 00:08:13.500 "dma_device_type": 1 00:08:13.500 }, 
00:08:13.500 { 00:08:13.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.500 "dma_device_type": 2 00:08:13.500 } 00:08:13.500 ], 00:08:13.500 "driver_specific": {} 00:08:13.500 } 00:08:13.500 ] 00:08:13.500 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.500 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:13.500 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:13.500 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:13.500 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:13.500 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.500 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.500 BaseBdev3 00:08:13.500 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.500 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:13.500 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:13.500 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:13.500 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:13.500 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:13.500 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:13.500 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:13.500 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:13.500 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.500 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.500 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:13.500 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.500 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.759 [ 00:08:13.759 { 00:08:13.759 "name": "BaseBdev3", 00:08:13.759 "aliases": [ 00:08:13.759 "0b554946-19a7-49db-aa78-3ae06139d99b" 00:08:13.759 ], 00:08:13.759 "product_name": "Malloc disk", 00:08:13.759 "block_size": 512, 00:08:13.759 "num_blocks": 65536, 00:08:13.759 "uuid": "0b554946-19a7-49db-aa78-3ae06139d99b", 00:08:13.759 "assigned_rate_limits": { 00:08:13.759 "rw_ios_per_sec": 0, 00:08:13.759 "rw_mbytes_per_sec": 0, 00:08:13.759 "r_mbytes_per_sec": 0, 00:08:13.759 "w_mbytes_per_sec": 0 00:08:13.759 }, 00:08:13.759 "claimed": false, 00:08:13.759 "zoned": false, 00:08:13.759 "supported_io_types": { 00:08:13.759 "read": true, 00:08:13.759 "write": true, 00:08:13.759 "unmap": true, 00:08:13.759 "flush": true, 00:08:13.759 "reset": true, 00:08:13.759 "nvme_admin": false, 00:08:13.759 "nvme_io": false, 00:08:13.759 "nvme_io_md": false, 00:08:13.759 "write_zeroes": true, 00:08:13.759 "zcopy": true, 00:08:13.759 "get_zone_info": false, 00:08:13.760 "zone_management": false, 00:08:13.760 "zone_append": false, 00:08:13.760 "compare": false, 00:08:13.760 "compare_and_write": false, 00:08:13.760 "abort": true, 00:08:13.760 "seek_hole": false, 00:08:13.760 "seek_data": false, 00:08:13.760 "copy": true, 00:08:13.760 "nvme_iov_md": false 00:08:13.760 }, 00:08:13.760 "memory_domains": [ 00:08:13.760 { 00:08:13.760 "dma_device_id": "system", 00:08:13.760 "dma_device_type": 1 00:08:13.760 }, 00:08:13.760 { 
00:08:13.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.760 "dma_device_type": 2 00:08:13.760 } 00:08:13.760 ], 00:08:13.760 "driver_specific": {} 00:08:13.760 } 00:08:13.760 ] 00:08:13.760 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.760 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:13.760 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:13.760 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:13.760 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:13.760 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.760 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.760 [2024-11-27 11:46:39.908372] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:13.760 [2024-11-27 11:46:39.908497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:13.760 [2024-11-27 11:46:39.908555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:13.760 [2024-11-27 11:46:39.910539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:13.760 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.760 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:13.760 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.760 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:13.760 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.760 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.760 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.760 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.760 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.760 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.760 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.760 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.760 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.760 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.760 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.760 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.760 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.760 "name": "Existed_Raid", 00:08:13.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.760 "strip_size_kb": 64, 00:08:13.760 "state": "configuring", 00:08:13.760 "raid_level": "raid0", 00:08:13.760 "superblock": false, 00:08:13.760 "num_base_bdevs": 3, 00:08:13.760 "num_base_bdevs_discovered": 2, 00:08:13.760 "num_base_bdevs_operational": 3, 00:08:13.760 "base_bdevs_list": [ 00:08:13.760 { 00:08:13.760 "name": "BaseBdev1", 00:08:13.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.760 
"is_configured": false, 00:08:13.760 "data_offset": 0, 00:08:13.760 "data_size": 0 00:08:13.760 }, 00:08:13.760 { 00:08:13.760 "name": "BaseBdev2", 00:08:13.760 "uuid": "02f2f0d7-cb50-4ccf-948e-0e227863f5c9", 00:08:13.760 "is_configured": true, 00:08:13.760 "data_offset": 0, 00:08:13.760 "data_size": 65536 00:08:13.760 }, 00:08:13.760 { 00:08:13.760 "name": "BaseBdev3", 00:08:13.760 "uuid": "0b554946-19a7-49db-aa78-3ae06139d99b", 00:08:13.760 "is_configured": true, 00:08:13.760 "data_offset": 0, 00:08:13.760 "data_size": 65536 00:08:13.760 } 00:08:13.760 ] 00:08:13.760 }' 00:08:13.760 11:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.760 11:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.020 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:14.020 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.020 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.020 [2024-11-27 11:46:40.391572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:14.020 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.020 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.020 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.020 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.020 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.020 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.020 11:46:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.020 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.020 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.020 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.020 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.020 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.020 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.280 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.280 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.280 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.280 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.280 "name": "Existed_Raid", 00:08:14.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.280 "strip_size_kb": 64, 00:08:14.280 "state": "configuring", 00:08:14.280 "raid_level": "raid0", 00:08:14.280 "superblock": false, 00:08:14.280 "num_base_bdevs": 3, 00:08:14.280 "num_base_bdevs_discovered": 1, 00:08:14.280 "num_base_bdevs_operational": 3, 00:08:14.280 "base_bdevs_list": [ 00:08:14.280 { 00:08:14.280 "name": "BaseBdev1", 00:08:14.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.280 "is_configured": false, 00:08:14.280 "data_offset": 0, 00:08:14.280 "data_size": 0 00:08:14.280 }, 00:08:14.280 { 00:08:14.280 "name": null, 00:08:14.280 "uuid": "02f2f0d7-cb50-4ccf-948e-0e227863f5c9", 00:08:14.280 "is_configured": false, 00:08:14.280 "data_offset": 0, 
00:08:14.280 "data_size": 65536 00:08:14.280 }, 00:08:14.280 { 00:08:14.280 "name": "BaseBdev3", 00:08:14.280 "uuid": "0b554946-19a7-49db-aa78-3ae06139d99b", 00:08:14.280 "is_configured": true, 00:08:14.280 "data_offset": 0, 00:08:14.280 "data_size": 65536 00:08:14.280 } 00:08:14.280 ] 00:08:14.280 }' 00:08:14.280 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.280 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.538 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.538 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:14.538 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.538 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.538 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.538 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:14.538 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:14.538 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.538 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.798 [2024-11-27 11:46:40.924761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:14.798 BaseBdev1 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.798 [ 00:08:14.798 { 00:08:14.798 "name": "BaseBdev1", 00:08:14.798 "aliases": [ 00:08:14.798 "fc7fdf61-816b-4da9-b47d-fcf8c3a73243" 00:08:14.798 ], 00:08:14.798 "product_name": "Malloc disk", 00:08:14.798 "block_size": 512, 00:08:14.798 "num_blocks": 65536, 00:08:14.798 "uuid": "fc7fdf61-816b-4da9-b47d-fcf8c3a73243", 00:08:14.798 "assigned_rate_limits": { 00:08:14.798 "rw_ios_per_sec": 0, 00:08:14.798 "rw_mbytes_per_sec": 0, 00:08:14.798 "r_mbytes_per_sec": 0, 00:08:14.798 "w_mbytes_per_sec": 0 00:08:14.798 }, 00:08:14.798 "claimed": true, 00:08:14.798 "claim_type": "exclusive_write", 00:08:14.798 "zoned": false, 00:08:14.798 "supported_io_types": { 00:08:14.798 "read": true, 00:08:14.798 "write": true, 00:08:14.798 "unmap": 
true, 00:08:14.798 "flush": true, 00:08:14.798 "reset": true, 00:08:14.798 "nvme_admin": false, 00:08:14.798 "nvme_io": false, 00:08:14.798 "nvme_io_md": false, 00:08:14.798 "write_zeroes": true, 00:08:14.798 "zcopy": true, 00:08:14.798 "get_zone_info": false, 00:08:14.798 "zone_management": false, 00:08:14.798 "zone_append": false, 00:08:14.798 "compare": false, 00:08:14.798 "compare_and_write": false, 00:08:14.798 "abort": true, 00:08:14.798 "seek_hole": false, 00:08:14.798 "seek_data": false, 00:08:14.798 "copy": true, 00:08:14.798 "nvme_iov_md": false 00:08:14.798 }, 00:08:14.798 "memory_domains": [ 00:08:14.798 { 00:08:14.798 "dma_device_id": "system", 00:08:14.798 "dma_device_type": 1 00:08:14.798 }, 00:08:14.798 { 00:08:14.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.798 "dma_device_type": 2 00:08:14.798 } 00:08:14.798 ], 00:08:14.798 "driver_specific": {} 00:08:14.798 } 00:08:14.798 ] 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.798 11:46:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.798 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.798 "name": "Existed_Raid", 00:08:14.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.798 "strip_size_kb": 64, 00:08:14.798 "state": "configuring", 00:08:14.798 "raid_level": "raid0", 00:08:14.798 "superblock": false, 00:08:14.798 "num_base_bdevs": 3, 00:08:14.798 "num_base_bdevs_discovered": 2, 00:08:14.798 "num_base_bdevs_operational": 3, 00:08:14.798 "base_bdevs_list": [ 00:08:14.798 { 00:08:14.798 "name": "BaseBdev1", 00:08:14.798 "uuid": "fc7fdf61-816b-4da9-b47d-fcf8c3a73243", 00:08:14.798 "is_configured": true, 00:08:14.798 "data_offset": 0, 00:08:14.798 "data_size": 65536 00:08:14.798 }, 00:08:14.798 { 00:08:14.798 "name": null, 00:08:14.798 "uuid": "02f2f0d7-cb50-4ccf-948e-0e227863f5c9", 00:08:14.798 "is_configured": false, 00:08:14.798 "data_offset": 0, 00:08:14.798 "data_size": 65536 00:08:14.798 }, 00:08:14.798 { 00:08:14.799 "name": "BaseBdev3", 00:08:14.799 "uuid": "0b554946-19a7-49db-aa78-3ae06139d99b", 00:08:14.799 "is_configured": true, 00:08:14.799 "data_offset": 0, 
00:08:14.799 "data_size": 65536 00:08:14.799 } 00:08:14.799 ] 00:08:14.799 }' 00:08:14.799 11:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.799 11:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.058 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.058 11:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.058 11:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.058 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:15.058 11:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.317 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:15.317 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:15.317 11:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.317 11:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.317 [2024-11-27 11:46:41.463930] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:15.317 11:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.317 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.317 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.317 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.317 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:15.317 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.317 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.317 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.317 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.317 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.317 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.317 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.317 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.317 11:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.317 11:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.317 11:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.317 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.317 "name": "Existed_Raid", 00:08:15.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.317 "strip_size_kb": 64, 00:08:15.317 "state": "configuring", 00:08:15.317 "raid_level": "raid0", 00:08:15.317 "superblock": false, 00:08:15.317 "num_base_bdevs": 3, 00:08:15.317 "num_base_bdevs_discovered": 1, 00:08:15.317 "num_base_bdevs_operational": 3, 00:08:15.317 "base_bdevs_list": [ 00:08:15.317 { 00:08:15.317 "name": "BaseBdev1", 00:08:15.317 "uuid": "fc7fdf61-816b-4da9-b47d-fcf8c3a73243", 00:08:15.318 "is_configured": true, 00:08:15.318 "data_offset": 0, 00:08:15.318 "data_size": 65536 00:08:15.318 }, 00:08:15.318 { 
00:08:15.318 "name": null, 00:08:15.318 "uuid": "02f2f0d7-cb50-4ccf-948e-0e227863f5c9", 00:08:15.318 "is_configured": false, 00:08:15.318 "data_offset": 0, 00:08:15.318 "data_size": 65536 00:08:15.318 }, 00:08:15.318 { 00:08:15.318 "name": null, 00:08:15.318 "uuid": "0b554946-19a7-49db-aa78-3ae06139d99b", 00:08:15.318 "is_configured": false, 00:08:15.318 "data_offset": 0, 00:08:15.318 "data_size": 65536 00:08:15.318 } 00:08:15.318 ] 00:08:15.318 }' 00:08:15.318 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.318 11:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.577 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.577 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:15.577 11:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.577 11:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.577 11:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.836 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:15.836 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:15.836 11:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.836 11:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.836 [2024-11-27 11:46:41.971134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:15.836 11:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.836 11:46:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.836 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.836 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.836 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.836 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.836 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.836 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.836 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.836 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.836 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.836 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.836 11:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.836 11:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.836 11:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.836 11:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.836 11:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.836 "name": "Existed_Raid", 00:08:15.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.836 "strip_size_kb": 64, 00:08:15.836 "state": "configuring", 00:08:15.836 "raid_level": "raid0", 00:08:15.836 
"superblock": false, 00:08:15.836 "num_base_bdevs": 3, 00:08:15.836 "num_base_bdevs_discovered": 2, 00:08:15.836 "num_base_bdevs_operational": 3, 00:08:15.836 "base_bdevs_list": [ 00:08:15.836 { 00:08:15.836 "name": "BaseBdev1", 00:08:15.836 "uuid": "fc7fdf61-816b-4da9-b47d-fcf8c3a73243", 00:08:15.836 "is_configured": true, 00:08:15.836 "data_offset": 0, 00:08:15.836 "data_size": 65536 00:08:15.836 }, 00:08:15.836 { 00:08:15.836 "name": null, 00:08:15.836 "uuid": "02f2f0d7-cb50-4ccf-948e-0e227863f5c9", 00:08:15.836 "is_configured": false, 00:08:15.836 "data_offset": 0, 00:08:15.836 "data_size": 65536 00:08:15.836 }, 00:08:15.836 { 00:08:15.836 "name": "BaseBdev3", 00:08:15.836 "uuid": "0b554946-19a7-49db-aa78-3ae06139d99b", 00:08:15.836 "is_configured": true, 00:08:15.836 "data_offset": 0, 00:08:15.836 "data_size": 65536 00:08:15.836 } 00:08:15.836 ] 00:08:15.836 }' 00:08:15.836 11:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.836 11:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.095 11:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:16.095 11:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.095 11:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.095 11:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.095 11:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.095 11:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:16.095 11:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:16.095 11:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:16.095 11:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.095 [2024-11-27 11:46:42.458330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:16.354 11:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.354 11:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:16.354 11:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.354 11:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.354 11:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.354 11:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.354 11:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.354 11:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.354 11:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.354 11:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.354 11:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.354 11:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.354 11:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.354 11:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.354 11:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.354 11:46:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.354 11:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.354 "name": "Existed_Raid", 00:08:16.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.354 "strip_size_kb": 64, 00:08:16.354 "state": "configuring", 00:08:16.354 "raid_level": "raid0", 00:08:16.354 "superblock": false, 00:08:16.354 "num_base_bdevs": 3, 00:08:16.354 "num_base_bdevs_discovered": 1, 00:08:16.354 "num_base_bdevs_operational": 3, 00:08:16.355 "base_bdevs_list": [ 00:08:16.355 { 00:08:16.355 "name": null, 00:08:16.355 "uuid": "fc7fdf61-816b-4da9-b47d-fcf8c3a73243", 00:08:16.355 "is_configured": false, 00:08:16.355 "data_offset": 0, 00:08:16.355 "data_size": 65536 00:08:16.355 }, 00:08:16.355 { 00:08:16.355 "name": null, 00:08:16.355 "uuid": "02f2f0d7-cb50-4ccf-948e-0e227863f5c9", 00:08:16.355 "is_configured": false, 00:08:16.355 "data_offset": 0, 00:08:16.355 "data_size": 65536 00:08:16.355 }, 00:08:16.355 { 00:08:16.355 "name": "BaseBdev3", 00:08:16.355 "uuid": "0b554946-19a7-49db-aa78-3ae06139d99b", 00:08:16.355 "is_configured": true, 00:08:16.355 "data_offset": 0, 00:08:16.355 "data_size": 65536 00:08:16.355 } 00:08:16.355 ] 00:08:16.355 }' 00:08:16.355 11:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.355 11:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.614 11:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.614 11:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:16.614 11:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.614 11:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.874 11:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:16.874 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:16.874 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:16.875 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.875 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.875 [2024-11-27 11:46:43.030514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:16.875 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.875 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:16.875 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.875 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.875 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.875 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.875 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.875 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.875 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.875 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.875 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.875 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:16.875 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.875 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.875 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.875 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.875 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.875 "name": "Existed_Raid", 00:08:16.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.875 "strip_size_kb": 64, 00:08:16.875 "state": "configuring", 00:08:16.875 "raid_level": "raid0", 00:08:16.875 "superblock": false, 00:08:16.875 "num_base_bdevs": 3, 00:08:16.875 "num_base_bdevs_discovered": 2, 00:08:16.875 "num_base_bdevs_operational": 3, 00:08:16.875 "base_bdevs_list": [ 00:08:16.875 { 00:08:16.875 "name": null, 00:08:16.875 "uuid": "fc7fdf61-816b-4da9-b47d-fcf8c3a73243", 00:08:16.875 "is_configured": false, 00:08:16.875 "data_offset": 0, 00:08:16.875 "data_size": 65536 00:08:16.875 }, 00:08:16.875 { 00:08:16.875 "name": "BaseBdev2", 00:08:16.875 "uuid": "02f2f0d7-cb50-4ccf-948e-0e227863f5c9", 00:08:16.875 "is_configured": true, 00:08:16.875 "data_offset": 0, 00:08:16.875 "data_size": 65536 00:08:16.875 }, 00:08:16.875 { 00:08:16.875 "name": "BaseBdev3", 00:08:16.875 "uuid": "0b554946-19a7-49db-aa78-3ae06139d99b", 00:08:16.875 "is_configured": true, 00:08:16.875 "data_offset": 0, 00:08:16.875 "data_size": 65536 00:08:16.875 } 00:08:16.875 ] 00:08:16.875 }' 00:08:16.875 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.875 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.134 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:17.134 
11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.134 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.134 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.134 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.134 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:17.134 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.134 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.134 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.134 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:17.134 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fc7fdf61-816b-4da9-b47d-fcf8c3a73243 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.435 [2024-11-27 11:46:43.599137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:17.435 [2024-11-27 11:46:43.599279] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:17.435 [2024-11-27 11:46:43.599308] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:17.435 [2024-11-27 11:46:43.599681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:08:17.435 NewBaseBdev 00:08:17.435 [2024-11-27 11:46:43.599954] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:17.435 [2024-11-27 11:46:43.599977] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:17.435 [2024-11-27 11:46:43.600296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:17.435 [ 00:08:17.435 { 00:08:17.435 "name": "NewBaseBdev", 00:08:17.435 "aliases": [ 00:08:17.435 "fc7fdf61-816b-4da9-b47d-fcf8c3a73243" 00:08:17.435 ], 00:08:17.435 "product_name": "Malloc disk", 00:08:17.435 "block_size": 512, 00:08:17.435 "num_blocks": 65536, 00:08:17.435 "uuid": "fc7fdf61-816b-4da9-b47d-fcf8c3a73243", 00:08:17.435 "assigned_rate_limits": { 00:08:17.435 "rw_ios_per_sec": 0, 00:08:17.435 "rw_mbytes_per_sec": 0, 00:08:17.435 "r_mbytes_per_sec": 0, 00:08:17.435 "w_mbytes_per_sec": 0 00:08:17.435 }, 00:08:17.435 "claimed": true, 00:08:17.435 "claim_type": "exclusive_write", 00:08:17.435 "zoned": false, 00:08:17.435 "supported_io_types": { 00:08:17.435 "read": true, 00:08:17.435 "write": true, 00:08:17.435 "unmap": true, 00:08:17.435 "flush": true, 00:08:17.435 "reset": true, 00:08:17.435 "nvme_admin": false, 00:08:17.435 "nvme_io": false, 00:08:17.435 "nvme_io_md": false, 00:08:17.435 "write_zeroes": true, 00:08:17.435 "zcopy": true, 00:08:17.435 "get_zone_info": false, 00:08:17.435 "zone_management": false, 00:08:17.435 "zone_append": false, 00:08:17.435 "compare": false, 00:08:17.435 "compare_and_write": false, 00:08:17.435 "abort": true, 00:08:17.435 "seek_hole": false, 00:08:17.435 "seek_data": false, 00:08:17.435 "copy": true, 00:08:17.435 "nvme_iov_md": false 00:08:17.435 }, 00:08:17.435 "memory_domains": [ 00:08:17.435 { 00:08:17.435 "dma_device_id": "system", 00:08:17.435 "dma_device_type": 1 00:08:17.435 }, 00:08:17.435 { 00:08:17.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.435 "dma_device_type": 2 00:08:17.435 } 00:08:17.435 ], 00:08:17.435 "driver_specific": {} 00:08:17.435 } 00:08:17.435 ] 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.435 "name": "Existed_Raid", 00:08:17.435 "uuid": "722663d4-e03a-4429-87fa-c8097955482f", 00:08:17.435 "strip_size_kb": 64, 00:08:17.435 "state": "online", 00:08:17.435 "raid_level": "raid0", 00:08:17.435 "superblock": false, 00:08:17.435 "num_base_bdevs": 3, 00:08:17.435 
"num_base_bdevs_discovered": 3, 00:08:17.435 "num_base_bdevs_operational": 3, 00:08:17.435 "base_bdevs_list": [ 00:08:17.435 { 00:08:17.435 "name": "NewBaseBdev", 00:08:17.435 "uuid": "fc7fdf61-816b-4da9-b47d-fcf8c3a73243", 00:08:17.435 "is_configured": true, 00:08:17.435 "data_offset": 0, 00:08:17.435 "data_size": 65536 00:08:17.435 }, 00:08:17.435 { 00:08:17.435 "name": "BaseBdev2", 00:08:17.435 "uuid": "02f2f0d7-cb50-4ccf-948e-0e227863f5c9", 00:08:17.435 "is_configured": true, 00:08:17.435 "data_offset": 0, 00:08:17.435 "data_size": 65536 00:08:17.435 }, 00:08:17.435 { 00:08:17.435 "name": "BaseBdev3", 00:08:17.435 "uuid": "0b554946-19a7-49db-aa78-3ae06139d99b", 00:08:17.435 "is_configured": true, 00:08:17.435 "data_offset": 0, 00:08:17.435 "data_size": 65536 00:08:17.435 } 00:08:17.435 ] 00:08:17.435 }' 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.435 11:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.006 11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:18.006 11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:18.006 11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:18.006 11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:18.006 11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:18.006 11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:18.006 11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:18.006 11:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.006 11:46:44 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:18.006 11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:18.006 [2024-11-27 11:46:44.102683] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:18.006 11:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.006 11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:18.006 "name": "Existed_Raid", 00:08:18.006 "aliases": [ 00:08:18.006 "722663d4-e03a-4429-87fa-c8097955482f" 00:08:18.006 ], 00:08:18.006 "product_name": "Raid Volume", 00:08:18.006 "block_size": 512, 00:08:18.006 "num_blocks": 196608, 00:08:18.006 "uuid": "722663d4-e03a-4429-87fa-c8097955482f", 00:08:18.006 "assigned_rate_limits": { 00:08:18.006 "rw_ios_per_sec": 0, 00:08:18.006 "rw_mbytes_per_sec": 0, 00:08:18.006 "r_mbytes_per_sec": 0, 00:08:18.006 "w_mbytes_per_sec": 0 00:08:18.006 }, 00:08:18.006 "claimed": false, 00:08:18.006 "zoned": false, 00:08:18.006 "supported_io_types": { 00:08:18.006 "read": true, 00:08:18.006 "write": true, 00:08:18.006 "unmap": true, 00:08:18.006 "flush": true, 00:08:18.006 "reset": true, 00:08:18.006 "nvme_admin": false, 00:08:18.006 "nvme_io": false, 00:08:18.006 "nvme_io_md": false, 00:08:18.006 "write_zeroes": true, 00:08:18.006 "zcopy": false, 00:08:18.006 "get_zone_info": false, 00:08:18.006 "zone_management": false, 00:08:18.006 "zone_append": false, 00:08:18.006 "compare": false, 00:08:18.006 "compare_and_write": false, 00:08:18.006 "abort": false, 00:08:18.006 "seek_hole": false, 00:08:18.006 "seek_data": false, 00:08:18.006 "copy": false, 00:08:18.006 "nvme_iov_md": false 00:08:18.006 }, 00:08:18.006 "memory_domains": [ 00:08:18.006 { 00:08:18.006 "dma_device_id": "system", 00:08:18.006 "dma_device_type": 1 00:08:18.006 }, 00:08:18.006 { 00:08:18.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.006 "dma_device_type": 2 00:08:18.006 }, 00:08:18.006 
{ 00:08:18.006 "dma_device_id": "system", 00:08:18.006 "dma_device_type": 1 00:08:18.006 }, 00:08:18.006 { 00:08:18.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.006 "dma_device_type": 2 00:08:18.006 }, 00:08:18.006 { 00:08:18.006 "dma_device_id": "system", 00:08:18.006 "dma_device_type": 1 00:08:18.006 }, 00:08:18.006 { 00:08:18.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.006 "dma_device_type": 2 00:08:18.006 } 00:08:18.006 ], 00:08:18.006 "driver_specific": { 00:08:18.006 "raid": { 00:08:18.006 "uuid": "722663d4-e03a-4429-87fa-c8097955482f", 00:08:18.006 "strip_size_kb": 64, 00:08:18.006 "state": "online", 00:08:18.006 "raid_level": "raid0", 00:08:18.006 "superblock": false, 00:08:18.006 "num_base_bdevs": 3, 00:08:18.006 "num_base_bdevs_discovered": 3, 00:08:18.006 "num_base_bdevs_operational": 3, 00:08:18.006 "base_bdevs_list": [ 00:08:18.006 { 00:08:18.006 "name": "NewBaseBdev", 00:08:18.006 "uuid": "fc7fdf61-816b-4da9-b47d-fcf8c3a73243", 00:08:18.006 "is_configured": true, 00:08:18.006 "data_offset": 0, 00:08:18.006 "data_size": 65536 00:08:18.006 }, 00:08:18.006 { 00:08:18.006 "name": "BaseBdev2", 00:08:18.006 "uuid": "02f2f0d7-cb50-4ccf-948e-0e227863f5c9", 00:08:18.006 "is_configured": true, 00:08:18.006 "data_offset": 0, 00:08:18.006 "data_size": 65536 00:08:18.006 }, 00:08:18.006 { 00:08:18.006 "name": "BaseBdev3", 00:08:18.006 "uuid": "0b554946-19a7-49db-aa78-3ae06139d99b", 00:08:18.006 "is_configured": true, 00:08:18.006 "data_offset": 0, 00:08:18.006 "data_size": 65536 00:08:18.006 } 00:08:18.006 ] 00:08:18.006 } 00:08:18.006 } 00:08:18.006 }' 00:08:18.006 11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:18.006 11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:18.006 BaseBdev2 00:08:18.006 BaseBdev3' 00:08:18.006 11:46:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.006 11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:18.006 11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.006 11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.006 11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:18.006 11:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.006 11:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.006 11:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.006 11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.007 11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.007 11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.007 11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:18.007 11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.007 11:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.007 11:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.007 11:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.007 11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.007 
11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.007 11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.007 11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:18.007 11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.007 11:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.007 11:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.007 11:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.007 11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.007 11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.007 11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:18.007 11:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.007 11:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.007 [2024-11-27 11:46:44.381889] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:18.007 [2024-11-27 11:46:44.381968] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:18.007 [2024-11-27 11:46:44.382124] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:18.007 [2024-11-27 11:46:44.382237] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:18.007 [2024-11-27 11:46:44.382304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:18.007 11:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.007 11:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63771 00:08:18.007 11:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63771 ']' 00:08:18.007 11:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63771 00:08:18.267 11:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:18.267 11:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:18.267 11:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63771 00:08:18.267 killing process with pid 63771 00:08:18.267 11:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:18.267 11:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:18.267 11:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63771' 00:08:18.267 11:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63771 00:08:18.267 [2024-11-27 11:46:44.419502] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:18.267 11:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63771 00:08:18.526 [2024-11-27 11:46:44.749168] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:19.907 11:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:19.907 00:08:19.907 real 0m11.100s 00:08:19.907 user 0m17.712s 00:08:19.907 sys 0m1.890s 00:08:19.907 11:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.907 
11:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.907 ************************************ 00:08:19.907 END TEST raid_state_function_test 00:08:19.907 ************************************ 00:08:19.907 11:46:45 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:19.907 11:46:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:19.907 11:46:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.907 11:46:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:19.907 ************************************ 00:08:19.907 START TEST raid_state_function_test_sb 00:08:19.907 ************************************ 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 
00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:19.907 Process raid pid: 64398 00:08:19.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64398 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64398' 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64398 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64398 ']' 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:19.907 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:19.907 [2024-11-27 11:46:46.108048] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization...
00:08:19.907 [2024-11-27 11:46:46.108293] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:19.907 [2024-11-27 11:46:46.268563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:20.167 [2024-11-27 11:46:46.390779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:20.427 [2024-11-27 11:46:46.616059] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:20.428 [2024-11-27 11:46:46.616202] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:20.687 11:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:20.687 11:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:08:20.687 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:20.687 11:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:20.687 11:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:20.687 [2024-11-27 11:46:46.966034] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:20.688 [2024-11-27 11:46:46.966130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:20.688 [2024-11-27 11:46:46.966160] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:20.688 [2024-11-27 11:46:46.966183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:20.688 [2024-11-27 11:46:46.966201] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:20.688 [2024-11-27 11:46:46.966223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:20.688 11:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:20.688 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:20.688 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:20.688 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:20.688 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:20.688 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:20.688 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:20.688 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:20.688 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:20.688 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:20.688 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:20.688 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:20.688 11:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:20.688 11:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:20.688 11:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:20.688 11:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:20.688 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:20.688 "name": "Existed_Raid",
00:08:20.688 "uuid": "d6fd0629-9e52-40bf-8d2c-ef216d0e2b1c",
00:08:20.688 "strip_size_kb": 64,
00:08:20.688 "state": "configuring",
00:08:20.688 "raid_level": "raid0",
00:08:20.688 "superblock": true,
00:08:20.688 "num_base_bdevs": 3,
00:08:20.688 "num_base_bdevs_discovered": 0,
00:08:20.688 "num_base_bdevs_operational": 3,
00:08:20.688 "base_bdevs_list": [
00:08:20.688 {
00:08:20.688 "name": "BaseBdev1",
00:08:20.688 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:20.688 "is_configured": false,
00:08:20.688 "data_offset": 0,
00:08:20.688 "data_size": 0
00:08:20.688 },
00:08:20.688 {
00:08:20.688 "name": "BaseBdev2",
00:08:20.688 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:20.688 "is_configured": false,
00:08:20.688 "data_offset": 0,
00:08:20.688 "data_size": 0
00:08:20.688 },
00:08:20.688 {
00:08:20.688 "name": "BaseBdev3",
00:08:20.688 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:20.688 "is_configured": false,
00:08:20.688 "data_offset": 0,
00:08:20.688 "data_size": 0
00:08:20.688 }
00:08:20.688 ]
00:08:20.688 }'
00:08:20.688 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:20.688 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:21.257 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:21.257 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:21.257 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:21.257 [2024-11-27 11:46:47.449140] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:21.257 [2024-11-27 11:46:47.449236] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:08:21.257 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:21.257 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:21.257 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:21.257 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:21.257 [2024-11-27 11:46:47.457105] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:21.257 [2024-11-27 11:46:47.457184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:21.257 [2024-11-27 11:46:47.457243] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:21.257 [2024-11-27 11:46:47.457262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:21.257 [2024-11-27 11:46:47.457271] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:21.257 [2024-11-27 11:46:47.457281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:21.257 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:21.257 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:21.257 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:21.257 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:21.257 [2024-11-27 11:46:47.502198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:21.257 BaseBdev1
00:08:21.257 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:21.257 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:21.257 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:08:21.257 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:21.257 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:21.257 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:21.257 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:21.257 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:21.257 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:21.257 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:21.257 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:21.257 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:21.257 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:21.257 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:21.257 [
00:08:21.257 {
00:08:21.257 "name": "BaseBdev1",
00:08:21.257 "aliases": [
00:08:21.257 "0d19ab5b-b712-41dd-a7d0-a16731431cde"
00:08:21.257 ],
00:08:21.257 "product_name": "Malloc disk",
00:08:21.257 "block_size": 512,
00:08:21.257 "num_blocks": 65536,
00:08:21.257 "uuid": "0d19ab5b-b712-41dd-a7d0-a16731431cde",
00:08:21.257 "assigned_rate_limits": {
00:08:21.257 "rw_ios_per_sec": 0,
00:08:21.257 "rw_mbytes_per_sec": 0,
00:08:21.257 "r_mbytes_per_sec": 0,
00:08:21.257 "w_mbytes_per_sec": 0
00:08:21.257 },
00:08:21.257 "claimed": true,
00:08:21.257 "claim_type": "exclusive_write",
00:08:21.257 "zoned": false,
00:08:21.257 "supported_io_types": {
00:08:21.257 "read": true,
00:08:21.257 "write": true,
00:08:21.257 "unmap": true,
00:08:21.257 "flush": true,
00:08:21.257 "reset": true,
00:08:21.257 "nvme_admin": false,
00:08:21.257 "nvme_io": false,
00:08:21.257 "nvme_io_md": false,
00:08:21.257 "write_zeroes": true,
00:08:21.257 "zcopy": true,
00:08:21.257 "get_zone_info": false,
00:08:21.257 "zone_management": false,
00:08:21.257 "zone_append": false,
00:08:21.257 "compare": false,
00:08:21.257 "compare_and_write": false,
00:08:21.257 "abort": true,
00:08:21.257 "seek_hole": false,
00:08:21.257 "seek_data": false,
00:08:21.257 "copy": true,
00:08:21.257 "nvme_iov_md": false
00:08:21.257 },
00:08:21.257 "memory_domains": [
00:08:21.257 {
00:08:21.257 "dma_device_id": "system",
00:08:21.257 "dma_device_type": 1
00:08:21.257 },
00:08:21.257 {
00:08:21.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:21.257 "dma_device_type": 2
00:08:21.257 }
00:08:21.257 ],
00:08:21.257 "driver_specific": {}
00:08:21.257 }
00:08:21.258 ]
00:08:21.258 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:21.258 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:21.258 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:21.258 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:21.258 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:21.258 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:21.258 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:21.258 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:21.258 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:21.258 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:21.258 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:21.258 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:21.258 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:21.258 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:21.258 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:21.258 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:21.258 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:21.258 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:21.258 "name": "Existed_Raid",
00:08:21.258 "uuid": "d0bee278-ed8a-4e2d-860d-06709846db14",
00:08:21.258 "strip_size_kb": 64,
00:08:21.258 "state": "configuring",
00:08:21.258 "raid_level": "raid0",
00:08:21.258 "superblock": true,
00:08:21.258 "num_base_bdevs": 3,
00:08:21.258 "num_base_bdevs_discovered": 1,
00:08:21.258 "num_base_bdevs_operational": 3,
00:08:21.258 "base_bdevs_list": [
00:08:21.258 {
00:08:21.258 "name": "BaseBdev1",
00:08:21.258 "uuid": "0d19ab5b-b712-41dd-a7d0-a16731431cde",
00:08:21.258 "is_configured": true,
00:08:21.258 "data_offset": 2048,
00:08:21.258 "data_size": 63488
00:08:21.258 },
00:08:21.258 {
00:08:21.258 "name": "BaseBdev2",
00:08:21.258 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:21.258 "is_configured": false,
00:08:21.258 "data_offset": 0,
00:08:21.258 "data_size": 0
00:08:21.258 },
00:08:21.258 {
00:08:21.258 "name": "BaseBdev3",
00:08:21.258 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:21.258 "is_configured": false,
00:08:21.258 "data_offset": 0,
00:08:21.258 "data_size": 0
00:08:21.258 }
00:08:21.258 ]
00:08:21.258 }'
00:08:21.258 11:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:21.258 11:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:21.827 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:21.827 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:21.827 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:21.827 [2024-11-27 11:46:48.013417] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:21.827 [2024-11-27 11:46:48.013562] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:08:21.827 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:21.827 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:21.827 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:21.827 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:21.827 [2024-11-27 11:46:48.025446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:21.827 [2024-11-27 11:46:48.027500] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:21.827 [2024-11-27 11:46:48.027590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:21.827 [2024-11-27 11:46:48.027624] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:21.827 [2024-11-27 11:46:48.027647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:21.827 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:21.827 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:08:21.827 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:21.827 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:21.827 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:21.827 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:21.827 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:21.827 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:21.827 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:21.827 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:21.827 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:21.827 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:21.827 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:21.827 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:21.827 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:21.827 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:21.827 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:21.827 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:21.827 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:21.827 "name": "Existed_Raid",
00:08:21.827 "uuid": "f099dcf0-0af4-48eb-83a8-62d30b38549b",
00:08:21.827 "strip_size_kb": 64,
00:08:21.827 "state": "configuring",
00:08:21.827 "raid_level": "raid0",
00:08:21.827 "superblock": true,
00:08:21.827 "num_base_bdevs": 3,
00:08:21.827 "num_base_bdevs_discovered": 1,
00:08:21.827 "num_base_bdevs_operational": 3,
00:08:21.827 "base_bdevs_list": [
00:08:21.827 {
00:08:21.827 "name": "BaseBdev1",
00:08:21.827 "uuid": "0d19ab5b-b712-41dd-a7d0-a16731431cde",
00:08:21.827 "is_configured": true,
00:08:21.827 "data_offset": 2048,
00:08:21.827 "data_size": 63488
00:08:21.827 },
00:08:21.827 {
00:08:21.827 "name": "BaseBdev2",
00:08:21.827 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:21.827 "is_configured": false, 00:08:21.827 "data_offset": 0, 00:08:21.827 "data_size": 0 00:08:21.827 }, 00:08:21.827 { 00:08:21.827 "name": "BaseBdev3", 00:08:21.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.827 "is_configured": false, 00:08:21.827 "data_offset": 0, 00:08:21.827 "data_size": 0 00:08:21.827 } 00:08:21.827 ] 00:08:21.827 }' 00:08:21.828 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.828 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.396 [2024-11-27 11:46:48.524585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:22.396 BaseBdev2 00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:22.396 [
00:08:22.396 {
00:08:22.396 "name": "BaseBdev2",
00:08:22.396 "aliases": [
00:08:22.396 "182ffaec-fc1a-4a69-ac92-f0e25f96c205"
00:08:22.396 ],
00:08:22.396 "product_name": "Malloc disk",
00:08:22.396 "block_size": 512,
00:08:22.396 "num_blocks": 65536,
00:08:22.396 "uuid": "182ffaec-fc1a-4a69-ac92-f0e25f96c205",
00:08:22.396 "assigned_rate_limits": {
00:08:22.396 "rw_ios_per_sec": 0,
00:08:22.396 "rw_mbytes_per_sec": 0,
00:08:22.396 "r_mbytes_per_sec": 0,
00:08:22.396 "w_mbytes_per_sec": 0
00:08:22.396 },
00:08:22.396 "claimed": true,
00:08:22.396 "claim_type": "exclusive_write",
00:08:22.396 "zoned": false,
00:08:22.396 "supported_io_types": {
00:08:22.396 "read": true,
00:08:22.396 "write": true,
00:08:22.396 "unmap": true,
00:08:22.396 "flush": true,
00:08:22.396 "reset": true,
00:08:22.396 "nvme_admin": false,
00:08:22.396 "nvme_io": false,
00:08:22.396 "nvme_io_md": false,
00:08:22.396 "write_zeroes": true,
00:08:22.396 "zcopy": true,
00:08:22.396 "get_zone_info": false,
00:08:22.396 "zone_management": false,
00:08:22.396 "zone_append": false,
00:08:22.396 "compare": false,
00:08:22.396 "compare_and_write": false,
00:08:22.396 "abort": true,
00:08:22.396 "seek_hole": false,
00:08:22.396 "seek_data": false,
00:08:22.396 "copy": true,
00:08:22.396 "nvme_iov_md": false
00:08:22.396 },
00:08:22.396 "memory_domains": [
00:08:22.396 {
00:08:22.396 "dma_device_id": "system",
00:08:22.396 "dma_device_type": 1
00:08:22.396 },
00:08:22.396 {
00:08:22.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:22.396 "dma_device_type": 2
00:08:22.396 }
00:08:22.396 ],
00:08:22.396 "driver_specific": {}
00:08:22.396 }
00:08:22.396 ]
00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:22.396 "name": "Existed_Raid",
00:08:22.396 "uuid": "f099dcf0-0af4-48eb-83a8-62d30b38549b",
00:08:22.396 "strip_size_kb": 64,
00:08:22.396 "state": "configuring",
00:08:22.396 "raid_level": "raid0",
00:08:22.396 "superblock": true,
00:08:22.396 "num_base_bdevs": 3,
00:08:22.396 "num_base_bdevs_discovered": 2,
00:08:22.396 "num_base_bdevs_operational": 3,
00:08:22.396 "base_bdevs_list": [
00:08:22.396 {
00:08:22.396 "name": "BaseBdev1",
00:08:22.396 "uuid": "0d19ab5b-b712-41dd-a7d0-a16731431cde",
00:08:22.396 "is_configured": true,
00:08:22.396 "data_offset": 2048,
00:08:22.396 "data_size": 63488
00:08:22.396 },
00:08:22.396 {
00:08:22.396 "name": "BaseBdev2",
00:08:22.396 "uuid": "182ffaec-fc1a-4a69-ac92-f0e25f96c205",
00:08:22.396 "is_configured": true,
00:08:22.396 "data_offset": 2048,
00:08:22.396 "data_size": 63488
00:08:22.396 },
00:08:22.396 {
00:08:22.396 "name": "BaseBdev3",
00:08:22.396 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:22.396 "is_configured": false,
00:08:22.396 "data_offset": 0,
00:08:22.396 "data_size": 0
00:08:22.396 }
00:08:22.396 ]
00:08:22.396 }'
00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:22.396 11:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:22.656 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:22.656 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:22.656 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:22.916 [2024-11-27 11:46:49.069437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:22.916 [2024-11-27 11:46:49.069810] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:22.916 [2024-11-27 11:46:49.069913] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:08:22.916 BaseBdev3
00:08:22.916 [2024-11-27 11:46:49.070246] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:08:22.916 [2024-11-27 11:46:49.070445] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:22.916 [2024-11-27 11:46:49.070466] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:08:22.916 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:22.916 [2024-11-27 11:46:49.070643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:22.916 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:08:22.916 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:08:22.916 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:22.916 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:22.916 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:22.916 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:22.916 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:22.916 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:22.916 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:22.916 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:22.916 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:22.916 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:22.916 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:22.916 [
00:08:22.916 {
00:08:22.916 "name": "BaseBdev3",
00:08:22.916 "aliases": [
00:08:22.916 "b11d6702-d85f-4123-8c9c-b4d5d6cc9382"
00:08:22.916 ],
00:08:22.916 "product_name": "Malloc disk",
00:08:22.916 "block_size": 512,
00:08:22.916 "num_blocks": 65536,
00:08:22.916 "uuid": "b11d6702-d85f-4123-8c9c-b4d5d6cc9382",
00:08:22.916 "assigned_rate_limits": {
00:08:22.916 "rw_ios_per_sec": 0,
00:08:22.916 "rw_mbytes_per_sec": 0,
00:08:22.916 "r_mbytes_per_sec": 0,
00:08:22.916 "w_mbytes_per_sec": 0
00:08:22.916 },
00:08:22.916 "claimed": true,
00:08:22.916 "claim_type": "exclusive_write",
00:08:22.916 "zoned": false,
00:08:22.916 "supported_io_types": {
00:08:22.916 "read": true,
00:08:22.916 "write": true,
00:08:22.916 "unmap": true,
00:08:22.916 "flush": true,
00:08:22.916 "reset": true,
00:08:22.916 "nvme_admin": false,
00:08:22.916 "nvme_io": false,
00:08:22.916 "nvme_io_md": false,
00:08:22.916 "write_zeroes": true,
00:08:22.916 "zcopy": true,
00:08:22.916 "get_zone_info": false,
00:08:22.916 "zone_management": false,
00:08:22.916 "zone_append": false,
00:08:22.916 "compare": false, 00:08:22.916 "compare_and_write": false, 00:08:22.916 "abort": true, 00:08:22.916 "seek_hole": false, 00:08:22.916 "seek_data": false, 00:08:22.916 "copy": true, 00:08:22.916 "nvme_iov_md": false 00:08:22.916 }, 00:08:22.916 "memory_domains": [ 00:08:22.916 { 00:08:22.916 "dma_device_id": "system", 00:08:22.916 "dma_device_type": 1 00:08:22.916 }, 00:08:22.916 { 00:08:22.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.917 "dma_device_type": 2 00:08:22.917 } 00:08:22.917 ], 00:08:22.917 "driver_specific": {} 00:08:22.917 } 00:08:22.917 ] 00:08:22.917 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.917 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:22.917 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:22.917 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:22.917 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:22.917 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.917 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.917 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.917 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.917 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.917 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.917 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.917 11:46:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.917 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.917 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.917 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.917 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.917 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.917 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.917 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.917 "name": "Existed_Raid", 00:08:22.917 "uuid": "f099dcf0-0af4-48eb-83a8-62d30b38549b", 00:08:22.917 "strip_size_kb": 64, 00:08:22.917 "state": "online", 00:08:22.917 "raid_level": "raid0", 00:08:22.917 "superblock": true, 00:08:22.917 "num_base_bdevs": 3, 00:08:22.917 "num_base_bdevs_discovered": 3, 00:08:22.917 "num_base_bdevs_operational": 3, 00:08:22.917 "base_bdevs_list": [ 00:08:22.917 { 00:08:22.917 "name": "BaseBdev1", 00:08:22.917 "uuid": "0d19ab5b-b712-41dd-a7d0-a16731431cde", 00:08:22.917 "is_configured": true, 00:08:22.917 "data_offset": 2048, 00:08:22.917 "data_size": 63488 00:08:22.917 }, 00:08:22.917 { 00:08:22.917 "name": "BaseBdev2", 00:08:22.917 "uuid": "182ffaec-fc1a-4a69-ac92-f0e25f96c205", 00:08:22.917 "is_configured": true, 00:08:22.917 "data_offset": 2048, 00:08:22.917 "data_size": 63488 00:08:22.917 }, 00:08:22.917 { 00:08:22.917 "name": "BaseBdev3", 00:08:22.917 "uuid": "b11d6702-d85f-4123-8c9c-b4d5d6cc9382", 00:08:22.917 "is_configured": true, 00:08:22.917 "data_offset": 2048, 00:08:22.917 "data_size": 63488 00:08:22.917 } 00:08:22.917 ] 00:08:22.917 }' 
00:08:22.917 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.917 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.177 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:23.177 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:23.177 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:23.177 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:23.177 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:23.177 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:23.177 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:23.177 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:23.177 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.177 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.177 [2024-11-27 11:46:49.557072] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:23.436 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.436 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:23.436 "name": "Existed_Raid", 00:08:23.437 "aliases": [ 00:08:23.437 "f099dcf0-0af4-48eb-83a8-62d30b38549b" 00:08:23.437 ], 00:08:23.437 "product_name": "Raid Volume", 00:08:23.437 "block_size": 512, 00:08:23.437 "num_blocks": 190464, 00:08:23.437 "uuid": "f099dcf0-0af4-48eb-83a8-62d30b38549b", 
00:08:23.437 "assigned_rate_limits": { 00:08:23.437 "rw_ios_per_sec": 0, 00:08:23.437 "rw_mbytes_per_sec": 0, 00:08:23.437 "r_mbytes_per_sec": 0, 00:08:23.437 "w_mbytes_per_sec": 0 00:08:23.437 }, 00:08:23.437 "claimed": false, 00:08:23.437 "zoned": false, 00:08:23.437 "supported_io_types": { 00:08:23.437 "read": true, 00:08:23.437 "write": true, 00:08:23.437 "unmap": true, 00:08:23.437 "flush": true, 00:08:23.437 "reset": true, 00:08:23.437 "nvme_admin": false, 00:08:23.437 "nvme_io": false, 00:08:23.437 "nvme_io_md": false, 00:08:23.437 "write_zeroes": true, 00:08:23.437 "zcopy": false, 00:08:23.437 "get_zone_info": false, 00:08:23.437 "zone_management": false, 00:08:23.437 "zone_append": false, 00:08:23.437 "compare": false, 00:08:23.437 "compare_and_write": false, 00:08:23.437 "abort": false, 00:08:23.437 "seek_hole": false, 00:08:23.437 "seek_data": false, 00:08:23.437 "copy": false, 00:08:23.437 "nvme_iov_md": false 00:08:23.437 }, 00:08:23.437 "memory_domains": [ 00:08:23.437 { 00:08:23.437 "dma_device_id": "system", 00:08:23.437 "dma_device_type": 1 00:08:23.437 }, 00:08:23.437 { 00:08:23.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.437 "dma_device_type": 2 00:08:23.437 }, 00:08:23.437 { 00:08:23.437 "dma_device_id": "system", 00:08:23.437 "dma_device_type": 1 00:08:23.437 }, 00:08:23.437 { 00:08:23.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.437 "dma_device_type": 2 00:08:23.437 }, 00:08:23.437 { 00:08:23.437 "dma_device_id": "system", 00:08:23.437 "dma_device_type": 1 00:08:23.437 }, 00:08:23.437 { 00:08:23.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.437 "dma_device_type": 2 00:08:23.437 } 00:08:23.437 ], 00:08:23.437 "driver_specific": { 00:08:23.437 "raid": { 00:08:23.437 "uuid": "f099dcf0-0af4-48eb-83a8-62d30b38549b", 00:08:23.437 "strip_size_kb": 64, 00:08:23.437 "state": "online", 00:08:23.437 "raid_level": "raid0", 00:08:23.437 "superblock": true, 00:08:23.437 "num_base_bdevs": 3, 00:08:23.437 
"num_base_bdevs_discovered": 3, 00:08:23.437 "num_base_bdevs_operational": 3, 00:08:23.437 "base_bdevs_list": [ 00:08:23.437 { 00:08:23.437 "name": "BaseBdev1", 00:08:23.437 "uuid": "0d19ab5b-b712-41dd-a7d0-a16731431cde", 00:08:23.437 "is_configured": true, 00:08:23.437 "data_offset": 2048, 00:08:23.437 "data_size": 63488 00:08:23.437 }, 00:08:23.437 { 00:08:23.437 "name": "BaseBdev2", 00:08:23.437 "uuid": "182ffaec-fc1a-4a69-ac92-f0e25f96c205", 00:08:23.437 "is_configured": true, 00:08:23.437 "data_offset": 2048, 00:08:23.437 "data_size": 63488 00:08:23.437 }, 00:08:23.437 { 00:08:23.437 "name": "BaseBdev3", 00:08:23.437 "uuid": "b11d6702-d85f-4123-8c9c-b4d5d6cc9382", 00:08:23.437 "is_configured": true, 00:08:23.437 "data_offset": 2048, 00:08:23.437 "data_size": 63488 00:08:23.437 } 00:08:23.437 ] 00:08:23.437 } 00:08:23.437 } 00:08:23.437 }' 00:08:23.437 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:23.437 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:23.437 BaseBdev2 00:08:23.437 BaseBdev3' 00:08:23.437 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.437 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:23.437 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:23.437 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:23.437 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.437 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.437 11:46:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.437 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.437 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.437 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.437 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:23.437 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:23.437 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.437 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.437 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.437 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.437 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.437 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.437 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:23.437 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:23.437 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.437 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.437 11:46:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:23.699 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.699 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.699 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.699 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:23.699 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.699 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.699 [2024-11-27 11:46:49.864247] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:23.699 [2024-11-27 11:46:49.864341] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:23.699 [2024-11-27 11:46:49.864433] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:23.699 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.699 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:23.699 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:23.699 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:23.699 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:23.699 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:23.699 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:23.699 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:08:23.699 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:23.699 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.699 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.699 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:23.699 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.699 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.699 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.699 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.699 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.699 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.699 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.699 11:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.699 11:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.699 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.699 "name": "Existed_Raid", 00:08:23.699 "uuid": "f099dcf0-0af4-48eb-83a8-62d30b38549b", 00:08:23.699 "strip_size_kb": 64, 00:08:23.699 "state": "offline", 00:08:23.699 "raid_level": "raid0", 00:08:23.699 "superblock": true, 00:08:23.699 "num_base_bdevs": 3, 00:08:23.699 "num_base_bdevs_discovered": 2, 00:08:23.699 "num_base_bdevs_operational": 2, 00:08:23.699 "base_bdevs_list": [ 
00:08:23.699 { 00:08:23.699 "name": null, 00:08:23.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.699 "is_configured": false, 00:08:23.699 "data_offset": 0, 00:08:23.699 "data_size": 63488 00:08:23.699 }, 00:08:23.699 { 00:08:23.699 "name": "BaseBdev2", 00:08:23.699 "uuid": "182ffaec-fc1a-4a69-ac92-f0e25f96c205", 00:08:23.699 "is_configured": true, 00:08:23.699 "data_offset": 2048, 00:08:23.699 "data_size": 63488 00:08:23.699 }, 00:08:23.699 { 00:08:23.699 "name": "BaseBdev3", 00:08:23.699 "uuid": "b11d6702-d85f-4123-8c9c-b4d5d6cc9382", 00:08:23.699 "is_configured": true, 00:08:23.699 "data_offset": 2048, 00:08:23.699 "data_size": 63488 00:08:23.699 } 00:08:23.699 ] 00:08:23.699 }' 00:08:23.699 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.699 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.268 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:24.268 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:24.268 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.268 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.268 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.268 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:24.268 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.268 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:24.268 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:24.268 11:46:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:24.268 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.268 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.268 [2024-11-27 11:46:50.475789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:24.268 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.268 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:24.268 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:24.268 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.268 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:24.268 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.268 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.268 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.268 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:24.268 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:24.268 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:24.268 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.268 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.268 [2024-11-27 11:46:50.628676] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:24.268 [2024-11-27 
11:46:50.628728] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:24.528 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.528 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:24.528 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:24.528 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.528 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.528 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:24.528 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.528 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.528 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:24.528 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:24.528 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:24.528 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:24.528 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:24.528 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:24.528 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.528 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.528 BaseBdev2 00:08:24.528 11:46:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.528 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:24.528 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:24.528 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:24.528 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.529 [ 00:08:24.529 { 00:08:24.529 "name": "BaseBdev2", 00:08:24.529 "aliases": [ 00:08:24.529 "e6c9a0ce-dc31-4076-8552-c9c68bdedd72" 00:08:24.529 ], 00:08:24.529 "product_name": "Malloc disk", 00:08:24.529 "block_size": 512, 00:08:24.529 "num_blocks": 65536, 00:08:24.529 "uuid": "e6c9a0ce-dc31-4076-8552-c9c68bdedd72", 00:08:24.529 "assigned_rate_limits": { 00:08:24.529 "rw_ios_per_sec": 0, 00:08:24.529 "rw_mbytes_per_sec": 0, 
00:08:24.529 "r_mbytes_per_sec": 0, 00:08:24.529 "w_mbytes_per_sec": 0 00:08:24.529 }, 00:08:24.529 "claimed": false, 00:08:24.529 "zoned": false, 00:08:24.529 "supported_io_types": { 00:08:24.529 "read": true, 00:08:24.529 "write": true, 00:08:24.529 "unmap": true, 00:08:24.529 "flush": true, 00:08:24.529 "reset": true, 00:08:24.529 "nvme_admin": false, 00:08:24.529 "nvme_io": false, 00:08:24.529 "nvme_io_md": false, 00:08:24.529 "write_zeroes": true, 00:08:24.529 "zcopy": true, 00:08:24.529 "get_zone_info": false, 00:08:24.529 "zone_management": false, 00:08:24.529 "zone_append": false, 00:08:24.529 "compare": false, 00:08:24.529 "compare_and_write": false, 00:08:24.529 "abort": true, 00:08:24.529 "seek_hole": false, 00:08:24.529 "seek_data": false, 00:08:24.529 "copy": true, 00:08:24.529 "nvme_iov_md": false 00:08:24.529 }, 00:08:24.529 "memory_domains": [ 00:08:24.529 { 00:08:24.529 "dma_device_id": "system", 00:08:24.529 "dma_device_type": 1 00:08:24.529 }, 00:08:24.529 { 00:08:24.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.529 "dma_device_type": 2 00:08:24.529 } 00:08:24.529 ], 00:08:24.529 "driver_specific": {} 00:08:24.529 } 00:08:24.529 ] 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.529 BaseBdev3 00:08:24.529 
11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.529 [ 00:08:24.529 { 00:08:24.529 "name": "BaseBdev3", 00:08:24.529 "aliases": [ 00:08:24.529 "19a53a7f-e403-4633-bc97-79cc18867ce2" 00:08:24.529 ], 00:08:24.529 "product_name": "Malloc disk", 00:08:24.529 "block_size": 512, 00:08:24.529 "num_blocks": 65536, 00:08:24.529 "uuid": "19a53a7f-e403-4633-bc97-79cc18867ce2", 00:08:24.529 "assigned_rate_limits": { 00:08:24.529 "rw_ios_per_sec": 
0, 00:08:24.529 "rw_mbytes_per_sec": 0, 00:08:24.529 "r_mbytes_per_sec": 0, 00:08:24.529 "w_mbytes_per_sec": 0 00:08:24.529 }, 00:08:24.529 "claimed": false, 00:08:24.529 "zoned": false, 00:08:24.529 "supported_io_types": { 00:08:24.529 "read": true, 00:08:24.529 "write": true, 00:08:24.529 "unmap": true, 00:08:24.529 "flush": true, 00:08:24.529 "reset": true, 00:08:24.529 "nvme_admin": false, 00:08:24.529 "nvme_io": false, 00:08:24.529 "nvme_io_md": false, 00:08:24.529 "write_zeroes": true, 00:08:24.529 "zcopy": true, 00:08:24.529 "get_zone_info": false, 00:08:24.529 "zone_management": false, 00:08:24.529 "zone_append": false, 00:08:24.529 "compare": false, 00:08:24.529 "compare_and_write": false, 00:08:24.529 "abort": true, 00:08:24.529 "seek_hole": false, 00:08:24.529 "seek_data": false, 00:08:24.529 "copy": true, 00:08:24.529 "nvme_iov_md": false 00:08:24.529 }, 00:08:24.529 "memory_domains": [ 00:08:24.529 { 00:08:24.529 "dma_device_id": "system", 00:08:24.529 "dma_device_type": 1 00:08:24.529 }, 00:08:24.529 { 00:08:24.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.529 "dma_device_type": 2 00:08:24.529 } 00:08:24.529 ], 00:08:24.529 "driver_specific": {} 00:08:24.529 } 00:08:24.529 ] 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:24.529 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.529 11:46:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.529 [2024-11-27 11:46:50.899705] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:24.530 [2024-11-27 11:46:50.899807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:24.530 [2024-11-27 11:46:50.899892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:24.530 [2024-11-27 11:46:50.901775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:24.530 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.530 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:24.530 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.530 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.530 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.530 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.530 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.530 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.530 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.530 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.530 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.530 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:24.530 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.530 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.530 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.789 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.789 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.789 "name": "Existed_Raid", 00:08:24.789 "uuid": "a208906a-531a-4985-bd74-f9c29867c272", 00:08:24.789 "strip_size_kb": 64, 00:08:24.789 "state": "configuring", 00:08:24.789 "raid_level": "raid0", 00:08:24.789 "superblock": true, 00:08:24.789 "num_base_bdevs": 3, 00:08:24.789 "num_base_bdevs_discovered": 2, 00:08:24.789 "num_base_bdevs_operational": 3, 00:08:24.789 "base_bdevs_list": [ 00:08:24.789 { 00:08:24.789 "name": "BaseBdev1", 00:08:24.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.789 "is_configured": false, 00:08:24.789 "data_offset": 0, 00:08:24.789 "data_size": 0 00:08:24.789 }, 00:08:24.789 { 00:08:24.789 "name": "BaseBdev2", 00:08:24.789 "uuid": "e6c9a0ce-dc31-4076-8552-c9c68bdedd72", 00:08:24.789 "is_configured": true, 00:08:24.789 "data_offset": 2048, 00:08:24.789 "data_size": 63488 00:08:24.789 }, 00:08:24.789 { 00:08:24.789 "name": "BaseBdev3", 00:08:24.789 "uuid": "19a53a7f-e403-4633-bc97-79cc18867ce2", 00:08:24.789 "is_configured": true, 00:08:24.789 "data_offset": 2048, 00:08:24.789 "data_size": 63488 00:08:24.789 } 00:08:24.789 ] 00:08:24.789 }' 00:08:24.789 11:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.789 11:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.048 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # 
rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:25.048 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.048 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.048 [2024-11-27 11:46:51.347011] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:25.048 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.048 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:25.048 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.048 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.048 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.048 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.048 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.048 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.048 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.048 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.048 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.048 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.048 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.048 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:08:25.048 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.048 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.048 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.049 "name": "Existed_Raid", 00:08:25.049 "uuid": "a208906a-531a-4985-bd74-f9c29867c272", 00:08:25.049 "strip_size_kb": 64, 00:08:25.049 "state": "configuring", 00:08:25.049 "raid_level": "raid0", 00:08:25.049 "superblock": true, 00:08:25.049 "num_base_bdevs": 3, 00:08:25.049 "num_base_bdevs_discovered": 1, 00:08:25.049 "num_base_bdevs_operational": 3, 00:08:25.049 "base_bdevs_list": [ 00:08:25.049 { 00:08:25.049 "name": "BaseBdev1", 00:08:25.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.049 "is_configured": false, 00:08:25.049 "data_offset": 0, 00:08:25.049 "data_size": 0 00:08:25.049 }, 00:08:25.049 { 00:08:25.049 "name": null, 00:08:25.049 "uuid": "e6c9a0ce-dc31-4076-8552-c9c68bdedd72", 00:08:25.049 "is_configured": false, 00:08:25.049 "data_offset": 0, 00:08:25.049 "data_size": 63488 00:08:25.049 }, 00:08:25.049 { 00:08:25.049 "name": "BaseBdev3", 00:08:25.049 "uuid": "19a53a7f-e403-4633-bc97-79cc18867ce2", 00:08:25.049 "is_configured": true, 00:08:25.049 "data_offset": 2048, 00:08:25.049 "data_size": 63488 00:08:25.049 } 00:08:25.049 ] 00:08:25.049 }' 00:08:25.049 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.049 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.619 [2024-11-27 11:46:51.898744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:25.619 BaseBdev1 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.619 [ 00:08:25.619 { 00:08:25.619 "name": "BaseBdev1", 00:08:25.619 "aliases": [ 00:08:25.619 "a03f793a-4d53-449f-8ae8-c2590f77f051" 00:08:25.619 ], 00:08:25.619 "product_name": "Malloc disk", 00:08:25.619 "block_size": 512, 00:08:25.619 "num_blocks": 65536, 00:08:25.619 "uuid": "a03f793a-4d53-449f-8ae8-c2590f77f051", 00:08:25.619 "assigned_rate_limits": { 00:08:25.619 "rw_ios_per_sec": 0, 00:08:25.619 "rw_mbytes_per_sec": 0, 00:08:25.619 "r_mbytes_per_sec": 0, 00:08:25.619 "w_mbytes_per_sec": 0 00:08:25.619 }, 00:08:25.619 "claimed": true, 00:08:25.619 "claim_type": "exclusive_write", 00:08:25.619 "zoned": false, 00:08:25.619 "supported_io_types": { 00:08:25.619 "read": true, 00:08:25.619 "write": true, 00:08:25.619 "unmap": true, 00:08:25.619 "flush": true, 00:08:25.619 "reset": true, 00:08:25.619 "nvme_admin": false, 00:08:25.619 "nvme_io": false, 00:08:25.619 "nvme_io_md": false, 00:08:25.619 "write_zeroes": true, 00:08:25.619 "zcopy": true, 00:08:25.619 "get_zone_info": false, 00:08:25.619 "zone_management": false, 00:08:25.619 "zone_append": false, 00:08:25.619 "compare": false, 00:08:25.619 "compare_and_write": false, 00:08:25.619 "abort": true, 00:08:25.619 "seek_hole": false, 00:08:25.619 "seek_data": false, 00:08:25.619 "copy": true, 00:08:25.619 "nvme_iov_md": false 00:08:25.619 }, 00:08:25.619 "memory_domains": [ 00:08:25.619 { 00:08:25.619 "dma_device_id": "system", 00:08:25.619 "dma_device_type": 1 00:08:25.619 }, 00:08:25.619 { 00:08:25.619 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:25.619 "dma_device_type": 2 00:08:25.619 } 00:08:25.619 ], 00:08:25.619 "driver_specific": {} 00:08:25.619 } 00:08:25.619 ] 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.619 "name": "Existed_Raid", 00:08:25.619 "uuid": "a208906a-531a-4985-bd74-f9c29867c272", 00:08:25.619 "strip_size_kb": 64, 00:08:25.619 "state": "configuring", 00:08:25.619 "raid_level": "raid0", 00:08:25.619 "superblock": true, 00:08:25.619 "num_base_bdevs": 3, 00:08:25.619 "num_base_bdevs_discovered": 2, 00:08:25.619 "num_base_bdevs_operational": 3, 00:08:25.619 "base_bdevs_list": [ 00:08:25.619 { 00:08:25.619 "name": "BaseBdev1", 00:08:25.619 "uuid": "a03f793a-4d53-449f-8ae8-c2590f77f051", 00:08:25.619 "is_configured": true, 00:08:25.619 "data_offset": 2048, 00:08:25.619 "data_size": 63488 00:08:25.619 }, 00:08:25.619 { 00:08:25.619 "name": null, 00:08:25.619 "uuid": "e6c9a0ce-dc31-4076-8552-c9c68bdedd72", 00:08:25.619 "is_configured": false, 00:08:25.619 "data_offset": 0, 00:08:25.619 "data_size": 63488 00:08:25.619 }, 00:08:25.619 { 00:08:25.619 "name": "BaseBdev3", 00:08:25.619 "uuid": "19a53a7f-e403-4633-bc97-79cc18867ce2", 00:08:25.619 "is_configured": true, 00:08:25.619 "data_offset": 2048, 00:08:25.619 "data_size": 63488 00:08:25.619 } 00:08:25.619 ] 00:08:25.619 }' 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.619 11:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.190 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.190 11:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.190 11:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.190 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # 
jq '.[0].base_bdevs_list[0].is_configured' 00:08:26.190 11:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.190 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:26.190 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:26.190 11:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.190 11:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.190 [2024-11-27 11:46:52.461860] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:26.190 11:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.190 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:26.190 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.190 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.190 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.190 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.190 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.190 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.190 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.190 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.190 11:46:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.190 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.190 11:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.190 11:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.190 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.190 11:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.190 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.190 "name": "Existed_Raid", 00:08:26.190 "uuid": "a208906a-531a-4985-bd74-f9c29867c272", 00:08:26.190 "strip_size_kb": 64, 00:08:26.190 "state": "configuring", 00:08:26.190 "raid_level": "raid0", 00:08:26.190 "superblock": true, 00:08:26.190 "num_base_bdevs": 3, 00:08:26.190 "num_base_bdevs_discovered": 1, 00:08:26.190 "num_base_bdevs_operational": 3, 00:08:26.190 "base_bdevs_list": [ 00:08:26.190 { 00:08:26.190 "name": "BaseBdev1", 00:08:26.190 "uuid": "a03f793a-4d53-449f-8ae8-c2590f77f051", 00:08:26.190 "is_configured": true, 00:08:26.190 "data_offset": 2048, 00:08:26.190 "data_size": 63488 00:08:26.190 }, 00:08:26.190 { 00:08:26.190 "name": null, 00:08:26.190 "uuid": "e6c9a0ce-dc31-4076-8552-c9c68bdedd72", 00:08:26.190 "is_configured": false, 00:08:26.190 "data_offset": 0, 00:08:26.190 "data_size": 63488 00:08:26.190 }, 00:08:26.190 { 00:08:26.190 "name": null, 00:08:26.190 "uuid": "19a53a7f-e403-4633-bc97-79cc18867ce2", 00:08:26.190 "is_configured": false, 00:08:26.190 "data_offset": 0, 00:08:26.190 "data_size": 63488 00:08:26.190 } 00:08:26.190 ] 00:08:26.190 }' 00:08:26.190 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.190 11:46:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:26.758 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.758 11:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.759 11:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.759 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:26.759 11:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.759 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:26.759 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:26.759 11:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.759 11:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.759 [2024-11-27 11:46:52.989018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:26.759 11:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.759 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:26.759 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.759 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.759 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.759 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.759 11:46:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.759 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.759 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.759 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.759 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.759 11:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.759 11:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.759 11:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.759 11:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.759 11:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.759 11:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.759 "name": "Existed_Raid", 00:08:26.759 "uuid": "a208906a-531a-4985-bd74-f9c29867c272", 00:08:26.759 "strip_size_kb": 64, 00:08:26.759 "state": "configuring", 00:08:26.759 "raid_level": "raid0", 00:08:26.759 "superblock": true, 00:08:26.759 "num_base_bdevs": 3, 00:08:26.759 "num_base_bdevs_discovered": 2, 00:08:26.759 "num_base_bdevs_operational": 3, 00:08:26.759 "base_bdevs_list": [ 00:08:26.759 { 00:08:26.759 "name": "BaseBdev1", 00:08:26.759 "uuid": "a03f793a-4d53-449f-8ae8-c2590f77f051", 00:08:26.759 "is_configured": true, 00:08:26.759 "data_offset": 2048, 00:08:26.759 "data_size": 63488 00:08:26.759 }, 00:08:26.759 { 00:08:26.759 "name": null, 00:08:26.759 "uuid": "e6c9a0ce-dc31-4076-8552-c9c68bdedd72", 00:08:26.759 "is_configured": 
false, 00:08:26.759 "data_offset": 0, 00:08:26.759 "data_size": 63488 00:08:26.759 }, 00:08:26.759 { 00:08:26.759 "name": "BaseBdev3", 00:08:26.759 "uuid": "19a53a7f-e403-4633-bc97-79cc18867ce2", 00:08:26.759 "is_configured": true, 00:08:26.759 "data_offset": 2048, 00:08:26.759 "data_size": 63488 00:08:26.759 } 00:08:26.759 ] 00:08:26.759 }' 00:08:26.759 11:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.759 11:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.329 11:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.329 11:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.329 11:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.329 11:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:27.329 11:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.329 11:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:27.329 11:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:27.329 11:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.329 11:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.329 [2024-11-27 11:46:53.504142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:27.329 11:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.329 11:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:27.329 11:46:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.329 11:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.329 11:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.329 11:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.329 11:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.329 11:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.329 11:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.329 11:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.329 11:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.329 11:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.329 11:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.329 11:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.329 11:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.329 11:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.329 11:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.329 "name": "Existed_Raid", 00:08:27.329 "uuid": "a208906a-531a-4985-bd74-f9c29867c272", 00:08:27.329 "strip_size_kb": 64, 00:08:27.329 "state": "configuring", 00:08:27.329 "raid_level": "raid0", 00:08:27.329 "superblock": true, 00:08:27.329 "num_base_bdevs": 3, 00:08:27.329 
"num_base_bdevs_discovered": 1, 00:08:27.329 "num_base_bdevs_operational": 3, 00:08:27.329 "base_bdevs_list": [ 00:08:27.329 { 00:08:27.329 "name": null, 00:08:27.329 "uuid": "a03f793a-4d53-449f-8ae8-c2590f77f051", 00:08:27.329 "is_configured": false, 00:08:27.329 "data_offset": 0, 00:08:27.329 "data_size": 63488 00:08:27.329 }, 00:08:27.329 { 00:08:27.329 "name": null, 00:08:27.329 "uuid": "e6c9a0ce-dc31-4076-8552-c9c68bdedd72", 00:08:27.329 "is_configured": false, 00:08:27.329 "data_offset": 0, 00:08:27.329 "data_size": 63488 00:08:27.329 }, 00:08:27.329 { 00:08:27.329 "name": "BaseBdev3", 00:08:27.329 "uuid": "19a53a7f-e403-4633-bc97-79cc18867ce2", 00:08:27.329 "is_configured": true, 00:08:27.329 "data_offset": 2048, 00:08:27.329 "data_size": 63488 00:08:27.329 } 00:08:27.329 ] 00:08:27.329 }' 00:08:27.329 11:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.329 11:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.899 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.899 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.899 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.899 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:27.899 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.899 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:27.899 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:27.899 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.899 11:46:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.899 [2024-11-27 11:46:54.117944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:27.899 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.899 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:27.899 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.899 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.899 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.899 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.899 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.899 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.899 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.899 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.900 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.900 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.900 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.900 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.900 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.900 
11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.900 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.900 "name": "Existed_Raid", 00:08:27.900 "uuid": "a208906a-531a-4985-bd74-f9c29867c272", 00:08:27.900 "strip_size_kb": 64, 00:08:27.900 "state": "configuring", 00:08:27.900 "raid_level": "raid0", 00:08:27.900 "superblock": true, 00:08:27.900 "num_base_bdevs": 3, 00:08:27.900 "num_base_bdevs_discovered": 2, 00:08:27.900 "num_base_bdevs_operational": 3, 00:08:27.900 "base_bdevs_list": [ 00:08:27.900 { 00:08:27.900 "name": null, 00:08:27.900 "uuid": "a03f793a-4d53-449f-8ae8-c2590f77f051", 00:08:27.900 "is_configured": false, 00:08:27.900 "data_offset": 0, 00:08:27.900 "data_size": 63488 00:08:27.900 }, 00:08:27.900 { 00:08:27.900 "name": "BaseBdev2", 00:08:27.900 "uuid": "e6c9a0ce-dc31-4076-8552-c9c68bdedd72", 00:08:27.900 "is_configured": true, 00:08:27.900 "data_offset": 2048, 00:08:27.900 "data_size": 63488 00:08:27.900 }, 00:08:27.900 { 00:08:27.900 "name": "BaseBdev3", 00:08:27.900 "uuid": "19a53a7f-e403-4633-bc97-79cc18867ce2", 00:08:27.900 "is_configured": true, 00:08:27.900 "data_offset": 2048, 00:08:27.900 "data_size": 63488 00:08:27.900 } 00:08:27.900 ] 00:08:27.900 }' 00:08:27.900 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.900 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.469 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.469 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:28.469 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.469 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:28.469 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.469 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:28.469 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:28.469 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.469 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.469 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.469 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.469 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a03f793a-4d53-449f-8ae8-c2590f77f051 00:08:28.469 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.469 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.469 [2024-11-27 11:46:54.687440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:28.469 [2024-11-27 11:46:54.687840] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:28.469 [2024-11-27 11:46:54.687922] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:28.469 [2024-11-27 11:46:54.688242] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:28.469 NewBaseBdev 00:08:28.469 [2024-11-27 11:46:54.688458] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:28.469 [2024-11-27 11:46:54.688474] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:08:28.469 [2024-11-27 11:46:54.688628] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.469 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.469 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:28.469 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:28.469 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:28.469 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:28.469 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:28.469 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:28.469 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:28.469 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.469 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.469 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.469 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:28.469 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.469 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.469 [ 00:08:28.469 { 00:08:28.469 "name": "NewBaseBdev", 00:08:28.469 "aliases": [ 00:08:28.469 "a03f793a-4d53-449f-8ae8-c2590f77f051" 00:08:28.469 ], 00:08:28.469 "product_name": "Malloc disk", 00:08:28.469 "block_size": 512, 
00:08:28.469 "num_blocks": 65536, 00:08:28.469 "uuid": "a03f793a-4d53-449f-8ae8-c2590f77f051", 00:08:28.469 "assigned_rate_limits": { 00:08:28.469 "rw_ios_per_sec": 0, 00:08:28.469 "rw_mbytes_per_sec": 0, 00:08:28.469 "r_mbytes_per_sec": 0, 00:08:28.469 "w_mbytes_per_sec": 0 00:08:28.469 }, 00:08:28.469 "claimed": true, 00:08:28.469 "claim_type": "exclusive_write", 00:08:28.469 "zoned": false, 00:08:28.469 "supported_io_types": { 00:08:28.470 "read": true, 00:08:28.470 "write": true, 00:08:28.470 "unmap": true, 00:08:28.470 "flush": true, 00:08:28.470 "reset": true, 00:08:28.470 "nvme_admin": false, 00:08:28.470 "nvme_io": false, 00:08:28.470 "nvme_io_md": false, 00:08:28.470 "write_zeroes": true, 00:08:28.470 "zcopy": true, 00:08:28.470 "get_zone_info": false, 00:08:28.470 "zone_management": false, 00:08:28.470 "zone_append": false, 00:08:28.470 "compare": false, 00:08:28.470 "compare_and_write": false, 00:08:28.470 "abort": true, 00:08:28.470 "seek_hole": false, 00:08:28.470 "seek_data": false, 00:08:28.470 "copy": true, 00:08:28.470 "nvme_iov_md": false 00:08:28.470 }, 00:08:28.470 "memory_domains": [ 00:08:28.470 { 00:08:28.470 "dma_device_id": "system", 00:08:28.470 "dma_device_type": 1 00:08:28.470 }, 00:08:28.470 { 00:08:28.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.470 "dma_device_type": 2 00:08:28.470 } 00:08:28.470 ], 00:08:28.470 "driver_specific": {} 00:08:28.470 } 00:08:28.470 ] 00:08:28.470 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.470 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:28.470 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:28.470 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.470 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:08:28.470 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.470 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.470 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.470 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.470 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.470 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.470 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.470 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.470 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.470 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.470 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.470 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.470 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.470 "name": "Existed_Raid", 00:08:28.470 "uuid": "a208906a-531a-4985-bd74-f9c29867c272", 00:08:28.470 "strip_size_kb": 64, 00:08:28.470 "state": "online", 00:08:28.470 "raid_level": "raid0", 00:08:28.470 "superblock": true, 00:08:28.470 "num_base_bdevs": 3, 00:08:28.470 "num_base_bdevs_discovered": 3, 00:08:28.470 "num_base_bdevs_operational": 3, 00:08:28.470 "base_bdevs_list": [ 00:08:28.470 { 00:08:28.470 "name": "NewBaseBdev", 00:08:28.470 "uuid": 
"a03f793a-4d53-449f-8ae8-c2590f77f051", 00:08:28.470 "is_configured": true, 00:08:28.470 "data_offset": 2048, 00:08:28.470 "data_size": 63488 00:08:28.470 }, 00:08:28.470 { 00:08:28.470 "name": "BaseBdev2", 00:08:28.470 "uuid": "e6c9a0ce-dc31-4076-8552-c9c68bdedd72", 00:08:28.470 "is_configured": true, 00:08:28.470 "data_offset": 2048, 00:08:28.470 "data_size": 63488 00:08:28.470 }, 00:08:28.470 { 00:08:28.470 "name": "BaseBdev3", 00:08:28.470 "uuid": "19a53a7f-e403-4633-bc97-79cc18867ce2", 00:08:28.470 "is_configured": true, 00:08:28.470 "data_offset": 2048, 00:08:28.470 "data_size": 63488 00:08:28.470 } 00:08:28.470 ] 00:08:28.470 }' 00:08:28.470 11:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.470 11:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.040 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:29.040 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:29.040 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:29.040 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:29.040 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:29.040 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:29.040 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:29.041 [2024-11-27 11:46:55.190957] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:29.041 "name": "Existed_Raid", 00:08:29.041 "aliases": [ 00:08:29.041 "a208906a-531a-4985-bd74-f9c29867c272" 00:08:29.041 ], 00:08:29.041 "product_name": "Raid Volume", 00:08:29.041 "block_size": 512, 00:08:29.041 "num_blocks": 190464, 00:08:29.041 "uuid": "a208906a-531a-4985-bd74-f9c29867c272", 00:08:29.041 "assigned_rate_limits": { 00:08:29.041 "rw_ios_per_sec": 0, 00:08:29.041 "rw_mbytes_per_sec": 0, 00:08:29.041 "r_mbytes_per_sec": 0, 00:08:29.041 "w_mbytes_per_sec": 0 00:08:29.041 }, 00:08:29.041 "claimed": false, 00:08:29.041 "zoned": false, 00:08:29.041 "supported_io_types": { 00:08:29.041 "read": true, 00:08:29.041 "write": true, 00:08:29.041 "unmap": true, 00:08:29.041 "flush": true, 00:08:29.041 "reset": true, 00:08:29.041 "nvme_admin": false, 00:08:29.041 "nvme_io": false, 00:08:29.041 "nvme_io_md": false, 00:08:29.041 "write_zeroes": true, 00:08:29.041 "zcopy": false, 00:08:29.041 "get_zone_info": false, 00:08:29.041 "zone_management": false, 00:08:29.041 "zone_append": false, 00:08:29.041 "compare": false, 00:08:29.041 "compare_and_write": false, 00:08:29.041 "abort": false, 00:08:29.041 "seek_hole": false, 00:08:29.041 "seek_data": false, 00:08:29.041 "copy": false, 00:08:29.041 "nvme_iov_md": false 00:08:29.041 }, 00:08:29.041 "memory_domains": [ 00:08:29.041 { 00:08:29.041 "dma_device_id": "system", 00:08:29.041 "dma_device_type": 1 00:08:29.041 }, 00:08:29.041 { 00:08:29.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.041 "dma_device_type": 2 00:08:29.041 }, 00:08:29.041 { 00:08:29.041 "dma_device_id": "system", 00:08:29.041 "dma_device_type": 1 00:08:29.041 }, 00:08:29.041 { 00:08:29.041 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.041 "dma_device_type": 2 00:08:29.041 }, 00:08:29.041 { 00:08:29.041 "dma_device_id": "system", 00:08:29.041 "dma_device_type": 1 00:08:29.041 }, 00:08:29.041 { 00:08:29.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.041 "dma_device_type": 2 00:08:29.041 } 00:08:29.041 ], 00:08:29.041 "driver_specific": { 00:08:29.041 "raid": { 00:08:29.041 "uuid": "a208906a-531a-4985-bd74-f9c29867c272", 00:08:29.041 "strip_size_kb": 64, 00:08:29.041 "state": "online", 00:08:29.041 "raid_level": "raid0", 00:08:29.041 "superblock": true, 00:08:29.041 "num_base_bdevs": 3, 00:08:29.041 "num_base_bdevs_discovered": 3, 00:08:29.041 "num_base_bdevs_operational": 3, 00:08:29.041 "base_bdevs_list": [ 00:08:29.041 { 00:08:29.041 "name": "NewBaseBdev", 00:08:29.041 "uuid": "a03f793a-4d53-449f-8ae8-c2590f77f051", 00:08:29.041 "is_configured": true, 00:08:29.041 "data_offset": 2048, 00:08:29.041 "data_size": 63488 00:08:29.041 }, 00:08:29.041 { 00:08:29.041 "name": "BaseBdev2", 00:08:29.041 "uuid": "e6c9a0ce-dc31-4076-8552-c9c68bdedd72", 00:08:29.041 "is_configured": true, 00:08:29.041 "data_offset": 2048, 00:08:29.041 "data_size": 63488 00:08:29.041 }, 00:08:29.041 { 00:08:29.041 "name": "BaseBdev3", 00:08:29.041 "uuid": "19a53a7f-e403-4633-bc97-79cc18867ce2", 00:08:29.041 "is_configured": true, 00:08:29.041 "data_offset": 2048, 00:08:29.041 "data_size": 63488 00:08:29.041 } 00:08:29.041 ] 00:08:29.041 } 00:08:29.041 } 00:08:29.041 }' 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:29.041 BaseBdev2 00:08:29.041 BaseBdev3' 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.041 11:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.302 11:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.302 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.302 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.302 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:29.302 11:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.302 11:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.302 [2024-11-27 11:46:55.470186] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:29.302 [2024-11-27 11:46:55.470274] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:29.302 [2024-11-27 11:46:55.470409] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:29.302 [2024-11-27 11:46:55.470522] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:29.302 [2024-11-27 11:46:55.470588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:08:29.302 11:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.302 11:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64398 00:08:29.302 11:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64398 ']' 00:08:29.302 11:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64398 00:08:29.302 11:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:29.302 11:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:29.302 11:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64398 00:08:29.302 killing process with pid 64398 00:08:29.302 11:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:29.302 11:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:29.302 11:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64398' 00:08:29.302 11:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64398 00:08:29.302 [2024-11-27 11:46:55.517773] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:29.302 11:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64398 00:08:29.562 [2024-11-27 11:46:55.852764] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:30.944 11:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:30.944 00:08:30.944 real 0m11.011s 00:08:30.944 user 0m17.621s 00:08:30.944 sys 0m1.855s 00:08:30.944 11:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:30.944 11:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.944 ************************************ 00:08:30.944 END TEST raid_state_function_test_sb 00:08:30.944 ************************************ 00:08:30.945 11:46:57 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:30.945 11:46:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:30.945 11:46:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.945 11:46:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:30.945 ************************************ 00:08:30.945 START TEST raid_superblock_test 00:08:30.945 ************************************ 00:08:30.945 11:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:30.945 11:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:30.945 11:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:30.945 11:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:30.945 11:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:30.945 11:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:30.945 11:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:30.945 11:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:30.945 11:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:30.945 11:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:30.945 11:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:30.945 11:46:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:30.945 11:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:30.945 11:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:30.945 11:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:30.945 11:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:30.945 11:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:30.945 11:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65024 00:08:30.945 11:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:30.945 11:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65024 00:08:30.945 11:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65024 ']' 00:08:30.945 11:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.945 11:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.945 11:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.945 11:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.945 11:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.945 [2024-11-27 11:46:57.176052] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:08:30.945 [2024-11-27 11:46:57.176273] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65024 ] 00:08:31.204 [2024-11-27 11:46:57.351965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.204 [2024-11-27 11:46:57.479858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.463 [2024-11-27 11:46:57.703043] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.463 [2024-11-27 11:46:57.703188] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.722 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:31.722 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:31.722 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:31.722 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:31.722 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:31.722 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:31.722 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:31.722 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:31.722 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:31.722 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:31.722 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:31.722 
11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.722 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.981 malloc1 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.981 [2024-11-27 11:46:58.127246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:31.981 [2024-11-27 11:46:58.127382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.981 [2024-11-27 11:46:58.127433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:31.981 [2024-11-27 11:46:58.127473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.981 [2024-11-27 11:46:58.129992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.981 [2024-11-27 11:46:58.130067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:31.981 pt1 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.981 malloc2 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.981 [2024-11-27 11:46:58.193051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:31.981 [2024-11-27 11:46:58.193164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.981 [2024-11-27 11:46:58.193225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:31.981 [2024-11-27 11:46:58.193254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.981 [2024-11-27 11:46:58.195594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.981 pt2 00:08:31.981 [2024-11-27 11:46:58.195673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.981 malloc3 00:08:31.981 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.982 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:31.982 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.982 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.982 [2024-11-27 11:46:58.264309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:31.982 [2024-11-27 11:46:58.264444] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.982 [2024-11-27 11:46:58.264502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:31.982 [2024-11-27 11:46:58.264538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.982 [2024-11-27 11:46:58.266993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.982 [2024-11-27 11:46:58.267069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:31.982 pt3 00:08:31.982 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.982 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:31.982 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:31.982 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:31.982 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.982 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.982 [2024-11-27 11:46:58.276351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:31.982 [2024-11-27 11:46:58.278433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:31.982 [2024-11-27 11:46:58.278555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:31.982 [2024-11-27 11:46:58.278754] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:31.982 [2024-11-27 11:46:58.278803] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:31.982 [2024-11-27 11:46:58.279185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:31.982 [2024-11-27 11:46:58.279439] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:31.982 [2024-11-27 11:46:58.279484] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:31.982 [2024-11-27 11:46:58.279737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:31.982 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.982 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:31.982 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:31.982 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:31.982 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:31.982 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.982 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.982 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.982 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.982 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.982 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.982 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:31.982 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.982 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.982 11:46:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.982 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.982 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.982 "name": "raid_bdev1", 00:08:31.982 "uuid": "3c046302-baf0-4899-86c8-97b7a5e7420f", 00:08:31.982 "strip_size_kb": 64, 00:08:31.982 "state": "online", 00:08:31.982 "raid_level": "raid0", 00:08:31.982 "superblock": true, 00:08:31.982 "num_base_bdevs": 3, 00:08:31.982 "num_base_bdevs_discovered": 3, 00:08:31.982 "num_base_bdevs_operational": 3, 00:08:31.982 "base_bdevs_list": [ 00:08:31.982 { 00:08:31.982 "name": "pt1", 00:08:31.982 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:31.982 "is_configured": true, 00:08:31.982 "data_offset": 2048, 00:08:31.982 "data_size": 63488 00:08:31.982 }, 00:08:31.982 { 00:08:31.982 "name": "pt2", 00:08:31.982 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:31.982 "is_configured": true, 00:08:31.982 "data_offset": 2048, 00:08:31.982 "data_size": 63488 00:08:31.982 }, 00:08:31.982 { 00:08:31.982 "name": "pt3", 00:08:31.982 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:31.982 "is_configured": true, 00:08:31.982 "data_offset": 2048, 00:08:31.982 "data_size": 63488 00:08:31.982 } 00:08:31.982 ] 00:08:31.982 }' 00:08:31.982 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.982 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.551 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:32.551 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:32.551 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:32.551 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:32.551 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:32.551 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:32.551 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:32.551 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.551 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:32.551 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.551 [2024-11-27 11:46:58.800042] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:32.551 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.551 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:32.551 "name": "raid_bdev1", 00:08:32.551 "aliases": [ 00:08:32.551 "3c046302-baf0-4899-86c8-97b7a5e7420f" 00:08:32.551 ], 00:08:32.551 "product_name": "Raid Volume", 00:08:32.551 "block_size": 512, 00:08:32.551 "num_blocks": 190464, 00:08:32.551 "uuid": "3c046302-baf0-4899-86c8-97b7a5e7420f", 00:08:32.551 "assigned_rate_limits": { 00:08:32.551 "rw_ios_per_sec": 0, 00:08:32.551 "rw_mbytes_per_sec": 0, 00:08:32.551 "r_mbytes_per_sec": 0, 00:08:32.551 "w_mbytes_per_sec": 0 00:08:32.551 }, 00:08:32.551 "claimed": false, 00:08:32.551 "zoned": false, 00:08:32.551 "supported_io_types": { 00:08:32.551 "read": true, 00:08:32.551 "write": true, 00:08:32.551 "unmap": true, 00:08:32.551 "flush": true, 00:08:32.551 "reset": true, 00:08:32.551 "nvme_admin": false, 00:08:32.551 "nvme_io": false, 00:08:32.551 "nvme_io_md": false, 00:08:32.551 "write_zeroes": true, 00:08:32.551 "zcopy": false, 00:08:32.551 "get_zone_info": false, 00:08:32.551 "zone_management": false, 00:08:32.551 "zone_append": false, 00:08:32.551 "compare": 
false, 00:08:32.551 "compare_and_write": false, 00:08:32.551 "abort": false, 00:08:32.551 "seek_hole": false, 00:08:32.551 "seek_data": false, 00:08:32.551 "copy": false, 00:08:32.551 "nvme_iov_md": false 00:08:32.551 }, 00:08:32.551 "memory_domains": [ 00:08:32.551 { 00:08:32.551 "dma_device_id": "system", 00:08:32.551 "dma_device_type": 1 00:08:32.551 }, 00:08:32.551 { 00:08:32.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.551 "dma_device_type": 2 00:08:32.551 }, 00:08:32.551 { 00:08:32.551 "dma_device_id": "system", 00:08:32.551 "dma_device_type": 1 00:08:32.551 }, 00:08:32.551 { 00:08:32.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.551 "dma_device_type": 2 00:08:32.551 }, 00:08:32.551 { 00:08:32.551 "dma_device_id": "system", 00:08:32.551 "dma_device_type": 1 00:08:32.551 }, 00:08:32.551 { 00:08:32.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.551 "dma_device_type": 2 00:08:32.551 } 00:08:32.551 ], 00:08:32.551 "driver_specific": { 00:08:32.551 "raid": { 00:08:32.551 "uuid": "3c046302-baf0-4899-86c8-97b7a5e7420f", 00:08:32.551 "strip_size_kb": 64, 00:08:32.551 "state": "online", 00:08:32.551 "raid_level": "raid0", 00:08:32.551 "superblock": true, 00:08:32.551 "num_base_bdevs": 3, 00:08:32.551 "num_base_bdevs_discovered": 3, 00:08:32.551 "num_base_bdevs_operational": 3, 00:08:32.551 "base_bdevs_list": [ 00:08:32.551 { 00:08:32.551 "name": "pt1", 00:08:32.551 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:32.551 "is_configured": true, 00:08:32.551 "data_offset": 2048, 00:08:32.551 "data_size": 63488 00:08:32.551 }, 00:08:32.551 { 00:08:32.551 "name": "pt2", 00:08:32.551 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:32.551 "is_configured": true, 00:08:32.552 "data_offset": 2048, 00:08:32.552 "data_size": 63488 00:08:32.552 }, 00:08:32.552 { 00:08:32.552 "name": "pt3", 00:08:32.552 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:32.552 "is_configured": true, 00:08:32.552 "data_offset": 2048, 00:08:32.552 "data_size": 
63488 00:08:32.552 } 00:08:32.552 ] 00:08:32.552 } 00:08:32.552 } 00:08:32.552 }' 00:08:32.552 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:32.552 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:32.552 pt2 00:08:32.552 pt3' 00:08:32.552 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.552 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:32.552 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.552 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:32.552 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.552 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.812 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.812 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.812 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.812 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.812 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.813 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:32.813 11:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.813 11:46:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.813 11:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.813 [2024-11-27 11:46:59.096054] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3c046302-baf0-4899-86c8-97b7a5e7420f 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3c046302-baf0-4899-86c8-97b7a5e7420f ']' 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.813 [2024-11-27 11:46:59.139697] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:32.813 [2024-11-27 11:46:59.139791] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:32.813 [2024-11-27 11:46:59.139935] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:32.813 [2024-11-27 11:46:59.140035] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:32.813 [2024-11-27 11:46:59.140086] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
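The records above show the test tearing down the volume: `bdev_raid_delete raid_bdev1` followed by `bdev_raid_get_bdevs all`, which now returns nothing (so `raid_bdev=` is assigned empty). A minimal sketch of the JSON-RPC payloads behind those two `rpc_cmd` calls — the method names and parameters are taken from the log; the plain JSON-RPC 2.0 envelope framing is an assumption, not a copy of SPDK's client internals:

```python
import json

def jsonrpc_request(method, params=None, req_id=1):
    """Build a JSON-RPC 2.0 envelope for an SPDK-style RPC method.

    Sketch only: method names/params come from the log records above;
    the envelope shape is generic JSON-RPC 2.0."""
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req)

# Delete the raid bdev, then list remaining raid bdevs (per @441 / @442).
delete_req = jsonrpc_request("bdev_raid_delete", {"name": "raid_bdev1"})
list_req = jsonrpc_request("bdev_raid_get_bdevs", {"category": "all"}, req_id=2)
```

After the delete, the `all` listing comes back empty, which is what the `'[' -n '' ']'` guard at @443 is checking.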
00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:32.813 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:33.074 11:46:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.074 [2024-11-27 11:46:59.295786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:33.074 [2024-11-27 11:46:59.298029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:33.074 [2024-11-27 11:46:59.298143] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:33.074 [2024-11-27 11:46:59.298228] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:33.074 [2024-11-27 11:46:59.298354] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:33.074 [2024-11-27 11:46:59.298378] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:33.074 [2024-11-27 11:46:59.298399] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:33.074 [2024-11-27 11:46:59.298412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:33.074 request: 00:08:33.074 { 00:08:33.074 "name": "raid_bdev1", 00:08:33.074 "raid_level": "raid0", 00:08:33.074 "base_bdevs": [ 00:08:33.074 "malloc1", 00:08:33.074 "malloc2", 00:08:33.074 "malloc3" 00:08:33.074 ], 00:08:33.074 "strip_size_kb": 64, 00:08:33.074 "superblock": false, 00:08:33.074 "method": "bdev_raid_create", 00:08:33.074 "req_id": 1 00:08:33.074 } 00:08:33.074 Got JSON-RPC error response 00:08:33.074 response: 00:08:33.074 { 00:08:33.074 "code": -17, 00:08:33.074 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:33.074 } 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.074 [2024-11-27 11:46:59.355730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:33.074 [2024-11-27 11:46:59.355909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.074 [2024-11-27 11:46:59.355955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:33.074 [2024-11-27 11:46:59.355996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.074 [2024-11-27 11:46:59.358501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.074 [2024-11-27 11:46:59.358593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:33.074 [2024-11-27 11:46:59.358727] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:33.074 [2024-11-27 11:46:59.358816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
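At @188 the script extracts configured base bdev names from the raid dump with the jq filter `.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name`. A Python equivalent of that filter, run against a trimmed-down copy of the state at this point in the log (pt1 just claimed, pt2/pt3 still unconfigured) — the field names are from the dumps, the trimming is mine:

```python
def configured_base_bdevs(raid_bdev_info):
    """Python rendering of the @188 jq filter:
    .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name
    """
    raid = raid_bdev_info["driver_specific"]["raid"]
    return [b["name"] for b in raid["base_bdevs_list"] if b["is_configured"]]

# Trimmed copy of the "configuring" dump: only pt1 is configured so far.
info = {
    "driver_specific": {
        "raid": {
            "base_bdevs_list": [
                {"name": "pt1", "is_configured": True},
                {"name": None, "is_configured": False},
                {"name": None, "is_configured": False},
            ]
        }
    }
}
```

On the fully online volume earlier in the log, the same filter yields all three names (`pt1 pt2 pt3`), which is what `base_bdev_names` held.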
00:08:33.074 pt1 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:33.074 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.075 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.075 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.075 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.075 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.075 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.075 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.075 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.075 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.075 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:33.075 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.075 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.075 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.075 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.075 "name": "raid_bdev1", 00:08:33.075 "uuid": "3c046302-baf0-4899-86c8-97b7a5e7420f", 00:08:33.075 
"strip_size_kb": 64, 00:08:33.075 "state": "configuring", 00:08:33.075 "raid_level": "raid0", 00:08:33.075 "superblock": true, 00:08:33.075 "num_base_bdevs": 3, 00:08:33.075 "num_base_bdevs_discovered": 1, 00:08:33.075 "num_base_bdevs_operational": 3, 00:08:33.075 "base_bdevs_list": [ 00:08:33.075 { 00:08:33.075 "name": "pt1", 00:08:33.075 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:33.075 "is_configured": true, 00:08:33.075 "data_offset": 2048, 00:08:33.075 "data_size": 63488 00:08:33.075 }, 00:08:33.075 { 00:08:33.075 "name": null, 00:08:33.075 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:33.075 "is_configured": false, 00:08:33.075 "data_offset": 2048, 00:08:33.075 "data_size": 63488 00:08:33.075 }, 00:08:33.075 { 00:08:33.075 "name": null, 00:08:33.075 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:33.075 "is_configured": false, 00:08:33.075 "data_offset": 2048, 00:08:33.075 "data_size": 63488 00:08:33.075 } 00:08:33.075 ] 00:08:33.075 }' 00:08:33.075 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.075 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.643 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:33.643 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:33.643 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.643 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.643 [2024-11-27 11:46:59.847716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:33.643 [2024-11-27 11:46:59.847812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.643 [2024-11-27 11:46:59.847853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:33.643 [2024-11-27 11:46:59.847864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.643 [2024-11-27 11:46:59.848377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.643 [2024-11-27 11:46:59.848409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:33.643 [2024-11-27 11:46:59.848508] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:33.643 [2024-11-27 11:46:59.848542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:33.643 pt2 00:08:33.643 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.643 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:33.643 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.643 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.643 [2024-11-27 11:46:59.859732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:33.643 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.643 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:33.643 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:33.643 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.643 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.643 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.643 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.643 11:46:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.643 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.643 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.643 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.643 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.643 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:33.643 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.643 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.643 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.643 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.643 "name": "raid_bdev1", 00:08:33.643 "uuid": "3c046302-baf0-4899-86c8-97b7a5e7420f", 00:08:33.643 "strip_size_kb": 64, 00:08:33.643 "state": "configuring", 00:08:33.643 "raid_level": "raid0", 00:08:33.643 "superblock": true, 00:08:33.643 "num_base_bdevs": 3, 00:08:33.643 "num_base_bdevs_discovered": 1, 00:08:33.643 "num_base_bdevs_operational": 3, 00:08:33.643 "base_bdevs_list": [ 00:08:33.643 { 00:08:33.643 "name": "pt1", 00:08:33.643 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:33.643 "is_configured": true, 00:08:33.643 "data_offset": 2048, 00:08:33.643 "data_size": 63488 00:08:33.643 }, 00:08:33.643 { 00:08:33.643 "name": null, 00:08:33.643 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:33.644 "is_configured": false, 00:08:33.644 "data_offset": 0, 00:08:33.644 "data_size": 63488 00:08:33.644 }, 00:08:33.644 { 00:08:33.644 "name": null, 00:08:33.644 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:33.644 
"is_configured": false, 00:08:33.644 "data_offset": 2048, 00:08:33.644 "data_size": 63488 00:08:33.644 } 00:08:33.644 ] 00:08:33.644 }' 00:08:33.644 11:46:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.644 11:46:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.213 [2024-11-27 11:47:00.367459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:34.213 [2024-11-27 11:47:00.367624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.213 [2024-11-27 11:47:00.367665] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:34.213 [2024-11-27 11:47:00.367736] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.213 [2024-11-27 11:47:00.368286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.213 [2024-11-27 11:47:00.368355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:34.213 [2024-11-27 11:47:00.368476] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:34.213 [2024-11-27 11:47:00.368534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:34.213 pt2 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
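`verify_raid_bdev_state` (bdev_raid.sh@103–@115) pulls the matching entry out of `bdev_raid_get_bdevs all` with jq and compares it against the expected state, raid level, strip size, and base bdev counts. A simplified Python rendering of those checks — the real helper is a bash function and also derives the discovered count from `base_bdevs_list`; the field names and sample values below are taken from the "configuring" dump above:

```python
def verify_raid_bdev_state(info, expected_state, raid_level, strip_size_kb,
                           num_operational):
    """Mirror the comparisons verify_raid_bdev_state makes on the RPC dump.

    Simplified sketch: checks only the top-level fields shown in the log."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size_kb
    assert info["num_base_bdevs_operational"] == num_operational
    return True

# Values from the dump above: one of three base bdevs discovered so far.
tmp = {
    "name": "raid_bdev1",
    "state": "configuring",
    "raid_level": "raid0",
    "strip_size_kb": 64,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 3,
}
```

Once pt2 and pt3 are claimed below, the same verification is repeated with `expected_state=online` and all three base bdevs discovered.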
00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.213 [2024-11-27 11:47:00.379408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:34.213 [2024-11-27 11:47:00.379570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.213 [2024-11-27 11:47:00.379608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:34.213 [2024-11-27 11:47:00.379660] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.213 [2024-11-27 11:47:00.380155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.213 [2024-11-27 11:47:00.380224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:34.213 [2024-11-27 11:47:00.380330] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:34.213 [2024-11-27 11:47:00.380384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:34.213 [2024-11-27 11:47:00.380560] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:34.213 [2024-11-27 11:47:00.380603] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:34.213 [2024-11-27 11:47:00.380914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:34.213 [2024-11-27 11:47:00.381105] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:34.213 [2024-11-27 11:47:00.381147] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:34.213 [2024-11-27 11:47:00.381334] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.213 pt3 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.213 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.213 "name": "raid_bdev1", 00:08:34.213 "uuid": "3c046302-baf0-4899-86c8-97b7a5e7420f", 00:08:34.213 "strip_size_kb": 64, 00:08:34.213 "state": "online", 00:08:34.213 "raid_level": "raid0", 00:08:34.213 "superblock": true, 00:08:34.213 "num_base_bdevs": 3, 00:08:34.213 "num_base_bdevs_discovered": 3, 00:08:34.213 "num_base_bdevs_operational": 3, 00:08:34.213 "base_bdevs_list": [ 00:08:34.213 { 00:08:34.213 "name": "pt1", 00:08:34.213 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:34.213 "is_configured": true, 00:08:34.213 "data_offset": 2048, 00:08:34.213 "data_size": 63488 00:08:34.213 }, 00:08:34.213 { 00:08:34.213 "name": "pt2", 00:08:34.213 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:34.213 "is_configured": true, 00:08:34.213 "data_offset": 2048, 00:08:34.213 "data_size": 63488 00:08:34.213 }, 00:08:34.214 { 00:08:34.214 "name": "pt3", 00:08:34.214 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:34.214 "is_configured": true, 00:08:34.214 "data_offset": 2048, 00:08:34.214 "data_size": 63488 00:08:34.214 } 00:08:34.214 ] 00:08:34.214 }' 00:08:34.214 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.214 11:47:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.782 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:34.782 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:34.782 11:47:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:34.782 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:34.782 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:34.782 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:34.782 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:34.782 11:47:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.782 11:47:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.782 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:34.782 [2024-11-27 11:47:00.878987] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.782 11:47:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.782 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:34.782 "name": "raid_bdev1", 00:08:34.782 "aliases": [ 00:08:34.782 "3c046302-baf0-4899-86c8-97b7a5e7420f" 00:08:34.782 ], 00:08:34.782 "product_name": "Raid Volume", 00:08:34.782 "block_size": 512, 00:08:34.782 "num_blocks": 190464, 00:08:34.782 "uuid": "3c046302-baf0-4899-86c8-97b7a5e7420f", 00:08:34.782 "assigned_rate_limits": { 00:08:34.782 "rw_ios_per_sec": 0, 00:08:34.782 "rw_mbytes_per_sec": 0, 00:08:34.782 "r_mbytes_per_sec": 0, 00:08:34.782 "w_mbytes_per_sec": 0 00:08:34.782 }, 00:08:34.782 "claimed": false, 00:08:34.782 "zoned": false, 00:08:34.782 "supported_io_types": { 00:08:34.782 "read": true, 00:08:34.782 "write": true, 00:08:34.782 "unmap": true, 00:08:34.782 "flush": true, 00:08:34.782 "reset": true, 00:08:34.782 "nvme_admin": false, 00:08:34.782 "nvme_io": false, 00:08:34.782 "nvme_io_md": false, 00:08:34.782 
"write_zeroes": true, 00:08:34.782 "zcopy": false, 00:08:34.782 "get_zone_info": false, 00:08:34.782 "zone_management": false, 00:08:34.782 "zone_append": false, 00:08:34.782 "compare": false, 00:08:34.782 "compare_and_write": false, 00:08:34.782 "abort": false, 00:08:34.782 "seek_hole": false, 00:08:34.782 "seek_data": false, 00:08:34.782 "copy": false, 00:08:34.782 "nvme_iov_md": false 00:08:34.782 }, 00:08:34.782 "memory_domains": [ 00:08:34.782 { 00:08:34.782 "dma_device_id": "system", 00:08:34.782 "dma_device_type": 1 00:08:34.782 }, 00:08:34.783 { 00:08:34.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.783 "dma_device_type": 2 00:08:34.783 }, 00:08:34.783 { 00:08:34.783 "dma_device_id": "system", 00:08:34.783 "dma_device_type": 1 00:08:34.783 }, 00:08:34.783 { 00:08:34.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.783 "dma_device_type": 2 00:08:34.783 }, 00:08:34.783 { 00:08:34.783 "dma_device_id": "system", 00:08:34.783 "dma_device_type": 1 00:08:34.783 }, 00:08:34.783 { 00:08:34.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.783 "dma_device_type": 2 00:08:34.783 } 00:08:34.783 ], 00:08:34.783 "driver_specific": { 00:08:34.783 "raid": { 00:08:34.783 "uuid": "3c046302-baf0-4899-86c8-97b7a5e7420f", 00:08:34.783 "strip_size_kb": 64, 00:08:34.783 "state": "online", 00:08:34.783 "raid_level": "raid0", 00:08:34.783 "superblock": true, 00:08:34.783 "num_base_bdevs": 3, 00:08:34.783 "num_base_bdevs_discovered": 3, 00:08:34.783 "num_base_bdevs_operational": 3, 00:08:34.783 "base_bdevs_list": [ 00:08:34.783 { 00:08:34.783 "name": "pt1", 00:08:34.783 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:34.783 "is_configured": true, 00:08:34.783 "data_offset": 2048, 00:08:34.783 "data_size": 63488 00:08:34.783 }, 00:08:34.783 { 00:08:34.783 "name": "pt2", 00:08:34.783 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:34.783 "is_configured": true, 00:08:34.783 "data_offset": 2048, 00:08:34.783 "data_size": 63488 00:08:34.783 }, 00:08:34.783 
{ 00:08:34.783 "name": "pt3", 00:08:34.783 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:34.783 "is_configured": true, 00:08:34.783 "data_offset": 2048, 00:08:34.783 "data_size": 63488 00:08:34.783 } 00:08:34.783 ] 00:08:34.783 } 00:08:34.783 } 00:08:34.783 }' 00:08:34.783 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:34.783 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:34.783 pt2 00:08:34.783 pt3' 00:08:34.783 11:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.783 11:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:34.783 11:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.783 11:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:34.783 11:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.783 11:47:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.783 11:47:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.783 11:47:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.783 11:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.783 11:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.783 11:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.783 11:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:08:34.783 11:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:34.783 11:47:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.783 11:47:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.783 11:47:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.783 11:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.783 11:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.783 11:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.783 11:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:34.783 11:47:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.783 11:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.783 11:47:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.783 11:47:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.783 11:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.783 11:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.043 11:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:35.043 11:47:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.043 11:47:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.043 11:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:35.043 [2024-11-27 
11:47:01.170448] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.043 11:47:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.043 11:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3c046302-baf0-4899-86c8-97b7a5e7420f '!=' 3c046302-baf0-4899-86c8-97b7a5e7420f ']' 00:08:35.043 11:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:35.043 11:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:35.043 11:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:35.043 11:47:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65024 00:08:35.043 11:47:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65024 ']' 00:08:35.043 11:47:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65024 00:08:35.043 11:47:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:35.043 11:47:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.043 11:47:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65024 00:08:35.043 11:47:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.043 11:47:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.043 11:47:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65024' 00:08:35.043 killing process with pid 65024 00:08:35.043 11:47:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65024 00:08:35.043 [2024-11-27 11:47:01.255550] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:35.043 11:47:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 65024 00:08:35.043 [2024-11-27 11:47:01.255732] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:35.043 [2024-11-27 11:47:01.255846] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:35.043 [2024-11-27 11:47:01.255894] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:35.349 [2024-11-27 11:47:01.578796] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:36.729 11:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:36.729 00:08:36.729 real 0m5.695s 00:08:36.729 user 0m8.263s 00:08:36.729 sys 0m0.946s 00:08:36.729 11:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.729 ************************************ 00:08:36.729 END TEST raid_superblock_test 00:08:36.729 ************************************ 00:08:36.729 11:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.729 11:47:02 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:36.729 11:47:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:36.729 11:47:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.729 11:47:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:36.729 ************************************ 00:08:36.729 START TEST raid_read_error_test 00:08:36.729 ************************************ 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:36.729 11:47:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rTaPKTvhrz 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65284 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65284 00:08:36.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65284 ']' 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.729 11:47:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.729 [2024-11-27 11:47:02.980257] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:08:36.729 [2024-11-27 11:47:02.980561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65284 ] 00:08:36.987 [2024-11-27 11:47:03.162343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.987 [2024-11-27 11:47:03.297212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.245 [2024-11-27 11:47:03.539258] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.245 [2024-11-27 11:47:03.539412] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.812 11:47:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.812 11:47:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:37.812 11:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:37.812 11:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:37.812 11:47:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.812 11:47:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.812 BaseBdev1_malloc 00:08:37.812 11:47:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.812 11:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:37.812 11:47:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.812 11:47:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.812 true 00:08:37.812 11:47:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:37.812 11:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:37.812 11:47:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.812 11:47:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.812 [2024-11-27 11:47:03.976653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:37.812 [2024-11-27 11:47:03.976816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.812 [2024-11-27 11:47:03.976907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:37.812 [2024-11-27 11:47:03.976953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.812 [2024-11-27 11:47:03.979645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.812 [2024-11-27 11:47:03.979760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:37.812 BaseBdev1 00:08:37.812 11:47:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.812 11:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:37.812 11:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:37.812 11:47:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.812 11:47:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.812 BaseBdev2_malloc 00:08:37.812 11:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.812 11:47:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:37.812 11:47:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.812 11:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.812 true 00:08:37.812 11:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.812 11:47:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:37.812 11:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.812 11:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.812 [2024-11-27 11:47:04.040120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:37.812 [2024-11-27 11:47:04.040240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.812 [2024-11-27 11:47:04.040295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:37.812 [2024-11-27 11:47:04.040334] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.812 [2024-11-27 11:47:04.042822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.812 [2024-11-27 11:47:04.042877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:37.812 BaseBdev2 00:08:37.812 11:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.812 11:47:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.813 BaseBdev3_malloc 00:08:37.813 11:47:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.813 true 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.813 [2024-11-27 11:47:04.117520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:37.813 [2024-11-27 11:47:04.117635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.813 [2024-11-27 11:47:04.117698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:37.813 [2024-11-27 11:47:04.117715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.813 [2024-11-27 11:47:04.120326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.813 [2024-11-27 11:47:04.120375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:37.813 BaseBdev3 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.813 [2024-11-27 11:47:04.125606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:37.813 [2024-11-27 11:47:04.127863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:37.813 [2024-11-27 11:47:04.128029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:37.813 [2024-11-27 11:47:04.128339] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:37.813 [2024-11-27 11:47:04.128404] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:37.813 [2024-11-27 11:47:04.128785] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:37.813 [2024-11-27 11:47:04.129058] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:37.813 [2024-11-27 11:47:04.129113] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:37.813 [2024-11-27 11:47:04.129336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.813 11:47:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.813 "name": "raid_bdev1", 00:08:37.813 "uuid": "d4dc08db-268b-4ad6-8ca6-47c879d19321", 00:08:37.813 "strip_size_kb": 64, 00:08:37.813 "state": "online", 00:08:37.813 "raid_level": "raid0", 00:08:37.813 "superblock": true, 00:08:37.813 "num_base_bdevs": 3, 00:08:37.813 "num_base_bdevs_discovered": 3, 00:08:37.813 "num_base_bdevs_operational": 3, 00:08:37.813 "base_bdevs_list": [ 00:08:37.813 { 00:08:37.813 "name": "BaseBdev1", 00:08:37.813 "uuid": "20d52609-75dc-58d2-b6e7-a8e0e7393c03", 00:08:37.813 "is_configured": true, 00:08:37.813 "data_offset": 2048, 00:08:37.813 "data_size": 63488 00:08:37.813 }, 00:08:37.813 { 00:08:37.813 "name": "BaseBdev2", 00:08:37.813 "uuid": "9e693352-2769-5a8b-92f3-9d19e3cd2b26", 00:08:37.813 "is_configured": true, 00:08:37.813 "data_offset": 2048, 00:08:37.813 "data_size": 63488 
00:08:37.813 }, 00:08:37.813 { 00:08:37.813 "name": "BaseBdev3", 00:08:37.813 "uuid": "5f20673a-95b3-5383-9d87-3b7f4f5a91b7", 00:08:37.813 "is_configured": true, 00:08:37.813 "data_offset": 2048, 00:08:37.813 "data_size": 63488 00:08:37.813 } 00:08:37.813 ] 00:08:37.813 }' 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.813 11:47:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.379 11:47:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:38.379 11:47:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:38.379 [2024-11-27 11:47:04.694166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:39.322 11:47:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:39.322 11:47:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.322 11:47:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.322 11:47:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.322 11:47:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:39.322 11:47:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:39.322 11:47:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:39.322 11:47:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:39.322 11:47:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:39.322 11:47:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:39.322 11:47:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.322 11:47:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.322 11:47:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.322 11:47:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.322 11:47:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.322 11:47:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.322 11:47:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.322 11:47:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.322 11:47:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:39.322 11:47:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.322 11:47:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.322 11:47:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.322 11:47:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.322 "name": "raid_bdev1", 00:08:39.322 "uuid": "d4dc08db-268b-4ad6-8ca6-47c879d19321", 00:08:39.322 "strip_size_kb": 64, 00:08:39.322 "state": "online", 00:08:39.322 "raid_level": "raid0", 00:08:39.322 "superblock": true, 00:08:39.322 "num_base_bdevs": 3, 00:08:39.322 "num_base_bdevs_discovered": 3, 00:08:39.322 "num_base_bdevs_operational": 3, 00:08:39.322 "base_bdevs_list": [ 00:08:39.322 { 00:08:39.322 "name": "BaseBdev1", 00:08:39.322 "uuid": "20d52609-75dc-58d2-b6e7-a8e0e7393c03", 00:08:39.322 "is_configured": true, 00:08:39.322 "data_offset": 2048, 00:08:39.322 "data_size": 63488 
00:08:39.322 }, 00:08:39.322 { 00:08:39.322 "name": "BaseBdev2", 00:08:39.322 "uuid": "9e693352-2769-5a8b-92f3-9d19e3cd2b26", 00:08:39.322 "is_configured": true, 00:08:39.322 "data_offset": 2048, 00:08:39.322 "data_size": 63488 00:08:39.322 }, 00:08:39.322 { 00:08:39.322 "name": "BaseBdev3", 00:08:39.322 "uuid": "5f20673a-95b3-5383-9d87-3b7f4f5a91b7", 00:08:39.322 "is_configured": true, 00:08:39.322 "data_offset": 2048, 00:08:39.322 "data_size": 63488 00:08:39.322 } 00:08:39.322 ] 00:08:39.322 }' 00:08:39.322 11:47:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.322 11:47:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.890 11:47:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:39.890 11:47:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.890 11:47:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.890 [2024-11-27 11:47:05.974660] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:39.890 [2024-11-27 11:47:05.974779] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:39.890 { 00:08:39.890 "results": [ 00:08:39.890 { 00:08:39.890 "job": "raid_bdev1", 00:08:39.890 "core_mask": "0x1", 00:08:39.890 "workload": "randrw", 00:08:39.890 "percentage": 50, 00:08:39.890 "status": "finished", 00:08:39.890 "queue_depth": 1, 00:08:39.890 "io_size": 131072, 00:08:39.890 "runtime": 1.280939, 00:08:39.890 "iops": 12742.214890794956, 00:08:39.890 "mibps": 1592.7768613493695, 00:08:39.890 "io_failed": 1, 00:08:39.890 "io_timeout": 0, 00:08:39.890 "avg_latency_us": 108.53144931456056, 00:08:39.890 "min_latency_us": 24.929257641921396, 00:08:39.890 "max_latency_us": 1760.0279475982534 00:08:39.890 } 00:08:39.890 ], 00:08:39.890 "core_count": 1 00:08:39.890 } 00:08:39.890 [2024-11-27 
11:47:05.978169] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:39.890 [2024-11-27 11:47:05.978225] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.890 [2024-11-27 11:47:05.978269] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:39.890 [2024-11-27 11:47:05.978280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:39.890 11:47:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.890 11:47:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65284 00:08:39.890 11:47:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65284 ']' 00:08:39.890 11:47:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65284 00:08:39.890 11:47:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:39.890 11:47:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.890 11:47:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65284 00:08:39.890 killing process with pid 65284 00:08:39.890 11:47:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:39.890 11:47:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:39.890 11:47:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65284' 00:08:39.890 11:47:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65284 00:08:39.890 11:47:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65284 00:08:39.890 [2024-11-27 11:47:06.015803] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:40.149 [2024-11-27 
11:47:06.294027] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:41.526 11:47:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rTaPKTvhrz 00:08:41.526 11:47:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:41.526 11:47:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:41.526 11:47:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.78 00:08:41.526 11:47:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:41.526 11:47:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:41.526 11:47:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:41.526 11:47:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.78 != \0\.\0\0 ]] 00:08:41.526 00:08:41.526 real 0m4.852s 00:08:41.526 user 0m5.770s 00:08:41.526 sys 0m0.573s 00:08:41.526 11:47:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.526 11:47:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.526 ************************************ 00:08:41.526 END TEST raid_read_error_test 00:08:41.526 ************************************ 00:08:41.526 11:47:07 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:41.526 11:47:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:41.526 11:47:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.526 11:47:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:41.526 ************************************ 00:08:41.526 START TEST raid_write_error_test 00:08:41.526 ************************************ 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:08:41.526 11:47:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:41.526 11:47:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.PsYxDYitvO 00:08:41.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65428 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65428 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65428 ']' 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:41.526 11:47:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:41.527 11:47:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:41.527 11:47:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:41.527 11:47:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.527 [2024-11-27 11:47:07.864030] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:08:41.527 [2024-11-27 11:47:07.864170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65428 ] 00:08:41.785 [2024-11-27 11:47:08.044180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.043 [2024-11-27 11:47:08.171193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.043 [2024-11-27 11:47:08.413486] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.043 [2024-11-27 11:47:08.413538] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.612 BaseBdev1_malloc 00:08:42.612 11:47:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.612 true 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.612 [2024-11-27 11:47:08.824574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:42.612 [2024-11-27 11:47:08.824696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.612 [2024-11-27 11:47:08.824755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:42.612 [2024-11-27 11:47:08.824795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.612 [2024-11-27 11:47:08.827411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.612 [2024-11-27 11:47:08.827463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:42.612 BaseBdev1 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.612 BaseBdev2_malloc 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.612 true 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.612 [2024-11-27 11:47:08.883138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:42.612 [2024-11-27 11:47:08.883269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.612 [2024-11-27 11:47:08.883313] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:42.612 [2024-11-27 11:47:08.883350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.612 [2024-11-27 11:47:08.885987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.612 BaseBdev2 00:08:42.612 [2024-11-27 11:47:08.886079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.612 BaseBdev3_malloc 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.612 true 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.612 [2024-11-27 11:47:08.957218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:42.612 [2024-11-27 11:47:08.957357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.612 [2024-11-27 11:47:08.957425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:42.612 [2024-11-27 11:47:08.957476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.612 [2024-11-27 11:47:08.960110] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.612 BaseBdev3 00:08:42.612 [2024-11-27 11:47:08.960223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.612 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.612 [2024-11-27 11:47:08.965299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:42.612 [2024-11-27 11:47:08.967456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:42.612 [2024-11-27 11:47:08.967624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:42.612 [2024-11-27 11:47:08.967899] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:42.612 [2024-11-27 11:47:08.967919] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:42.612 [2024-11-27 11:47:08.968259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:42.613 [2024-11-27 11:47:08.968470] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:42.613 [2024-11-27 11:47:08.968486] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:42.613 [2024-11-27 11:47:08.968700] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.613 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.613 
11:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:42.613 11:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:42.613 11:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:42.613 11:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.613 11:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.613 11:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.613 11:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.613 11:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.613 11:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.613 11:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.613 11:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.613 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.613 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.613 11:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.613 11:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.872 11:47:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.872 "name": "raid_bdev1", 00:08:42.872 "uuid": "1cffcfb5-7586-429e-b2af-4e590b682eec", 00:08:42.872 "strip_size_kb": 64, 00:08:42.872 "state": "online", 00:08:42.872 "raid_level": "raid0", 00:08:42.872 "superblock": true, 
00:08:42.872 "num_base_bdevs": 3, 00:08:42.872 "num_base_bdevs_discovered": 3, 00:08:42.872 "num_base_bdevs_operational": 3, 00:08:42.872 "base_bdevs_list": [ 00:08:42.872 { 00:08:42.872 "name": "BaseBdev1", 00:08:42.872 "uuid": "d300ce88-6624-511f-8641-46b8b1e64567", 00:08:42.872 "is_configured": true, 00:08:42.872 "data_offset": 2048, 00:08:42.872 "data_size": 63488 00:08:42.872 }, 00:08:42.872 { 00:08:42.872 "name": "BaseBdev2", 00:08:42.872 "uuid": "3d16f868-d10c-56b2-b2e7-cd4021cf4408", 00:08:42.872 "is_configured": true, 00:08:42.872 "data_offset": 2048, 00:08:42.872 "data_size": 63488 00:08:42.872 }, 00:08:42.872 { 00:08:42.872 "name": "BaseBdev3", 00:08:42.872 "uuid": "ce14b006-5f07-5836-8b09-e283ab40ed1a", 00:08:42.872 "is_configured": true, 00:08:42.872 "data_offset": 2048, 00:08:42.872 "data_size": 63488 00:08:42.872 } 00:08:42.872 ] 00:08:42.872 }' 00:08:42.872 11:47:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.872 11:47:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.130 11:47:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:43.130 11:47:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:43.388 [2024-11-27 11:47:09.550085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:44.332 11:47:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:44.332 11:47:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.332 11:47:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.332 11:47:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.332 11:47:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local 
expected_num_base_bdevs 00:08:44.332 11:47:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:44.332 11:47:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:44.332 11:47:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:44.332 11:47:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.332 11:47:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.332 11:47:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:44.332 11:47:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.332 11:47:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.332 11:47:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.332 11:47:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.332 11:47:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.332 11:47:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.333 11:47:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.333 11:47:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.333 11:47:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.333 11:47:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.333 11:47:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.333 11:47:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:08:44.333 "name": "raid_bdev1", 00:08:44.333 "uuid": "1cffcfb5-7586-429e-b2af-4e590b682eec", 00:08:44.333 "strip_size_kb": 64, 00:08:44.333 "state": "online", 00:08:44.333 "raid_level": "raid0", 00:08:44.333 "superblock": true, 00:08:44.333 "num_base_bdevs": 3, 00:08:44.333 "num_base_bdevs_discovered": 3, 00:08:44.333 "num_base_bdevs_operational": 3, 00:08:44.333 "base_bdevs_list": [ 00:08:44.333 { 00:08:44.333 "name": "BaseBdev1", 00:08:44.333 "uuid": "d300ce88-6624-511f-8641-46b8b1e64567", 00:08:44.333 "is_configured": true, 00:08:44.333 "data_offset": 2048, 00:08:44.333 "data_size": 63488 00:08:44.333 }, 00:08:44.333 { 00:08:44.333 "name": "BaseBdev2", 00:08:44.333 "uuid": "3d16f868-d10c-56b2-b2e7-cd4021cf4408", 00:08:44.333 "is_configured": true, 00:08:44.333 "data_offset": 2048, 00:08:44.333 "data_size": 63488 00:08:44.333 }, 00:08:44.333 { 00:08:44.333 "name": "BaseBdev3", 00:08:44.333 "uuid": "ce14b006-5f07-5836-8b09-e283ab40ed1a", 00:08:44.333 "is_configured": true, 00:08:44.333 "data_offset": 2048, 00:08:44.333 "data_size": 63488 00:08:44.333 } 00:08:44.333 ] 00:08:44.333 }' 00:08:44.333 11:47:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.333 11:47:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.592 11:47:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:44.592 11:47:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.593 11:47:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.593 [2024-11-27 11:47:10.843352] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:44.593 [2024-11-27 11:47:10.843486] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:44.593 [2024-11-27 11:47:10.846952] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:08:44.593 [2024-11-27 11:47:10.847079] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.593 [2024-11-27 11:47:10.847148] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:44.593 [2024-11-27 11:47:10.847203] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:44.593 { 00:08:44.593 "results": [ 00:08:44.593 { 00:08:44.593 "job": "raid_bdev1", 00:08:44.593 "core_mask": "0x1", 00:08:44.593 "workload": "randrw", 00:08:44.593 "percentage": 50, 00:08:44.593 "status": "finished", 00:08:44.593 "queue_depth": 1, 00:08:44.593 "io_size": 131072, 00:08:44.593 "runtime": 1.293817, 00:08:44.593 "iops": 13027.344670846032, 00:08:44.593 "mibps": 1628.418083855754, 00:08:44.593 "io_failed": 1, 00:08:44.593 "io_timeout": 0, 00:08:44.593 "avg_latency_us": 106.23159648748299, 00:08:44.593 "min_latency_us": 23.252401746724892, 00:08:44.593 "max_latency_us": 1760.0279475982534 00:08:44.593 } 00:08:44.593 ], 00:08:44.593 "core_count": 1 00:08:44.593 } 00:08:44.593 11:47:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.593 11:47:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65428 00:08:44.593 11:47:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65428 ']' 00:08:44.593 11:47:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65428 00:08:44.593 11:47:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:44.593 11:47:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.593 11:47:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65428 00:08:44.593 11:47:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- #
process_name=reactor_0 00:08:44.593 11:47:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:44.593 11:47:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65428' 00:08:44.593 killing process with pid 65428 00:08:44.593 11:47:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65428 00:08:44.593 [2024-11-27 11:47:10.891874] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:44.593 11:47:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65428 00:08:44.852 [2024-11-27 11:47:11.176436] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:46.229 11:47:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.PsYxDYitvO 00:08:46.229 11:47:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:46.229 11:47:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:46.229 ************************************ 00:08:46.229 END TEST raid_write_error_test 00:08:46.229 ************************************ 00:08:46.230 11:47:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.77 00:08:46.230 11:47:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:46.230 11:47:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:46.230 11:47:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:46.230 11:47:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.77 != \0\.\0\0 ]] 00:08:46.230 00:08:46.230 real 0m4.856s 00:08:46.230 user 0m5.795s 00:08:46.230 sys 0m0.537s 00:08:46.230 11:47:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.230 11:47:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:46.488 11:47:12 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:46.488 11:47:12 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:08:46.488 11:47:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:46.488 11:47:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.488 11:47:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:46.488 ************************************ 00:08:46.488 START TEST raid_state_function_test 00:08:46.488 ************************************ 00:08:46.488 11:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:46.489 Process raid pid: 65576 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65576 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # 
echo 'Process raid pid: 65576' 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65576 00:08:46.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65576 ']' 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.489 11:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.489 [2024-11-27 11:47:12.785749] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:08:46.489 [2024-11-27 11:47:12.786032] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.747 [2024-11-27 11:47:12.970045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.747 [2024-11-27 11:47:13.108571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.006 [2024-11-27 11:47:13.357965] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.006 [2024-11-27 11:47:13.358020] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.575 11:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:47.575 11:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:47.575 11:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:47.575 11:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.575 11:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.575 [2024-11-27 11:47:13.687627] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:47.575 [2024-11-27 11:47:13.687702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:47.575 [2024-11-27 11:47:13.687715] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.575 [2024-11-27 11:47:13.687728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:47.575 [2024-11-27 11:47:13.687735] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:08:47.575 [2024-11-27 11:47:13.687747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:47.575 11:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.575 11:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:47.575 11:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.575 11:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.575 11:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.575 11:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.575 11:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.575 11:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.575 11:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.575 11:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.575 11:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.575 11:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.575 11:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.575 11:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.575 11:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.575 11:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.575 11:47:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.575 "name": "Existed_Raid", 00:08:47.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.575 "strip_size_kb": 64, 00:08:47.575 "state": "configuring", 00:08:47.575 "raid_level": "concat", 00:08:47.575 "superblock": false, 00:08:47.575 "num_base_bdevs": 3, 00:08:47.575 "num_base_bdevs_discovered": 0, 00:08:47.575 "num_base_bdevs_operational": 3, 00:08:47.575 "base_bdevs_list": [ 00:08:47.575 { 00:08:47.575 "name": "BaseBdev1", 00:08:47.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.575 "is_configured": false, 00:08:47.575 "data_offset": 0, 00:08:47.575 "data_size": 0 00:08:47.575 }, 00:08:47.575 { 00:08:47.575 "name": "BaseBdev2", 00:08:47.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.575 "is_configured": false, 00:08:47.575 "data_offset": 0, 00:08:47.575 "data_size": 0 00:08:47.575 }, 00:08:47.575 { 00:08:47.575 "name": "BaseBdev3", 00:08:47.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.575 "is_configured": false, 00:08:47.575 "data_offset": 0, 00:08:47.575 "data_size": 0 00:08:47.575 } 00:08:47.575 ] 00:08:47.575 }' 00:08:47.575 11:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.575 11:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.834 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:47.834 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.834 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.834 [2024-11-27 11:47:14.174748] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:47.834 [2024-11-27 11:47:14.174798] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:08:47.834 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.834 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:47.834 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.834 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.834 [2024-11-27 11:47:14.182737] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:47.834 [2024-11-27 11:47:14.182796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:47.834 [2024-11-27 11:47:14.182808] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.834 [2024-11-27 11:47:14.182819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:47.834 [2024-11-27 11:47:14.182827] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:47.834 [2024-11-27 11:47:14.182848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:47.834 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.834 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:47.834 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.834 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.092 [2024-11-27 11:47:14.232793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.092 BaseBdev1 00:08:48.092 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:08:48.092 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:48.092 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:48.092 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.092 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:48.092 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.092 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.092 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.092 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.092 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.092 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.092 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:48.092 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.092 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.092 [ 00:08:48.092 { 00:08:48.092 "name": "BaseBdev1", 00:08:48.092 "aliases": [ 00:08:48.092 "769c0757-0f00-4a16-bced-271189d50ce6" 00:08:48.092 ], 00:08:48.092 "product_name": "Malloc disk", 00:08:48.092 "block_size": 512, 00:08:48.092 "num_blocks": 65536, 00:08:48.092 "uuid": "769c0757-0f00-4a16-bced-271189d50ce6", 00:08:48.092 "assigned_rate_limits": { 00:08:48.092 "rw_ios_per_sec": 0, 00:08:48.092 "rw_mbytes_per_sec": 0, 00:08:48.092 "r_mbytes_per_sec": 0, 00:08:48.092 "w_mbytes_per_sec": 0 00:08:48.092 }, 
00:08:48.092 "claimed": true, 00:08:48.092 "claim_type": "exclusive_write", 00:08:48.092 "zoned": false, 00:08:48.092 "supported_io_types": { 00:08:48.092 "read": true, 00:08:48.092 "write": true, 00:08:48.092 "unmap": true, 00:08:48.092 "flush": true, 00:08:48.092 "reset": true, 00:08:48.092 "nvme_admin": false, 00:08:48.092 "nvme_io": false, 00:08:48.092 "nvme_io_md": false, 00:08:48.092 "write_zeroes": true, 00:08:48.092 "zcopy": true, 00:08:48.092 "get_zone_info": false, 00:08:48.092 "zone_management": false, 00:08:48.092 "zone_append": false, 00:08:48.092 "compare": false, 00:08:48.092 "compare_and_write": false, 00:08:48.092 "abort": true, 00:08:48.092 "seek_hole": false, 00:08:48.092 "seek_data": false, 00:08:48.092 "copy": true, 00:08:48.092 "nvme_iov_md": false 00:08:48.092 }, 00:08:48.092 "memory_domains": [ 00:08:48.092 { 00:08:48.092 "dma_device_id": "system", 00:08:48.092 "dma_device_type": 1 00:08:48.092 }, 00:08:48.092 { 00:08:48.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.092 "dma_device_type": 2 00:08:48.092 } 00:08:48.092 ], 00:08:48.092 "driver_specific": {} 00:08:48.092 } 00:08:48.092 ] 00:08:48.092 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.092 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:48.092 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:48.092 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.092 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.092 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.092 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.092 11:47:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.092 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.092 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.092 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.092 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.093 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.093 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.093 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.093 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.093 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.093 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.093 "name": "Existed_Raid", 00:08:48.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.093 "strip_size_kb": 64, 00:08:48.093 "state": "configuring", 00:08:48.093 "raid_level": "concat", 00:08:48.093 "superblock": false, 00:08:48.093 "num_base_bdevs": 3, 00:08:48.093 "num_base_bdevs_discovered": 1, 00:08:48.093 "num_base_bdevs_operational": 3, 00:08:48.093 "base_bdevs_list": [ 00:08:48.093 { 00:08:48.093 "name": "BaseBdev1", 00:08:48.093 "uuid": "769c0757-0f00-4a16-bced-271189d50ce6", 00:08:48.093 "is_configured": true, 00:08:48.093 "data_offset": 0, 00:08:48.093 "data_size": 65536 00:08:48.093 }, 00:08:48.093 { 00:08:48.093 "name": "BaseBdev2", 00:08:48.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.093 "is_configured": false, 00:08:48.093 
"data_offset": 0, 00:08:48.093 "data_size": 0 00:08:48.093 }, 00:08:48.093 { 00:08:48.093 "name": "BaseBdev3", 00:08:48.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.093 "is_configured": false, 00:08:48.093 "data_offset": 0, 00:08:48.093 "data_size": 0 00:08:48.093 } 00:08:48.093 ] 00:08:48.093 }' 00:08:48.093 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.093 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.660 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:48.660 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.660 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.660 [2024-11-27 11:47:14.768034] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:48.660 [2024-11-27 11:47:14.768225] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:48.660 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.660 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:48.660 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.660 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.660 [2024-11-27 11:47:14.776098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.660 [2024-11-27 11:47:14.778236] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:48.660 [2024-11-27 11:47:14.778284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:48.660 [2024-11-27 11:47:14.778296] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:48.660 [2024-11-27 11:47:14.778307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:48.660 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.660 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:48.660 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:48.660 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:48.660 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.660 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.660 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.660 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.660 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.660 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.660 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.660 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.660 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.660 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.660 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:48.660 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.660 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.660 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.660 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.660 "name": "Existed_Raid", 00:08:48.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.660 "strip_size_kb": 64, 00:08:48.660 "state": "configuring", 00:08:48.660 "raid_level": "concat", 00:08:48.660 "superblock": false, 00:08:48.660 "num_base_bdevs": 3, 00:08:48.660 "num_base_bdevs_discovered": 1, 00:08:48.660 "num_base_bdevs_operational": 3, 00:08:48.660 "base_bdevs_list": [ 00:08:48.660 { 00:08:48.660 "name": "BaseBdev1", 00:08:48.660 "uuid": "769c0757-0f00-4a16-bced-271189d50ce6", 00:08:48.660 "is_configured": true, 00:08:48.660 "data_offset": 0, 00:08:48.660 "data_size": 65536 00:08:48.660 }, 00:08:48.660 { 00:08:48.660 "name": "BaseBdev2", 00:08:48.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.660 "is_configured": false, 00:08:48.660 "data_offset": 0, 00:08:48.660 "data_size": 0 00:08:48.660 }, 00:08:48.660 { 00:08:48.660 "name": "BaseBdev3", 00:08:48.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.660 "is_configured": false, 00:08:48.660 "data_offset": 0, 00:08:48.660 "data_size": 0 00:08:48.660 } 00:08:48.660 ] 00:08:48.660 }' 00:08:48.660 11:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.660 11:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.919 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:48.919 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:48.919 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.919 [2024-11-27 11:47:15.289763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.919 BaseBdev2 00:08:48.919 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.919 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:48.919 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:48.919 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.919 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:48.919 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.919 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.919 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.919 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.919 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.179 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.179 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:49.179 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.179 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.179 [ 00:08:49.179 { 00:08:49.179 "name": "BaseBdev2", 00:08:49.179 "aliases": [ 00:08:49.179 "a6ee639c-cc48-4035-8d41-8130e1b9b731" 00:08:49.179 ], 00:08:49.179 
"product_name": "Malloc disk", 00:08:49.179 "block_size": 512, 00:08:49.179 "num_blocks": 65536, 00:08:49.179 "uuid": "a6ee639c-cc48-4035-8d41-8130e1b9b731", 00:08:49.179 "assigned_rate_limits": { 00:08:49.179 "rw_ios_per_sec": 0, 00:08:49.179 "rw_mbytes_per_sec": 0, 00:08:49.179 "r_mbytes_per_sec": 0, 00:08:49.179 "w_mbytes_per_sec": 0 00:08:49.179 }, 00:08:49.179 "claimed": true, 00:08:49.179 "claim_type": "exclusive_write", 00:08:49.179 "zoned": false, 00:08:49.179 "supported_io_types": { 00:08:49.179 "read": true, 00:08:49.179 "write": true, 00:08:49.179 "unmap": true, 00:08:49.179 "flush": true, 00:08:49.179 "reset": true, 00:08:49.179 "nvme_admin": false, 00:08:49.179 "nvme_io": false, 00:08:49.179 "nvme_io_md": false, 00:08:49.179 "write_zeroes": true, 00:08:49.179 "zcopy": true, 00:08:49.179 "get_zone_info": false, 00:08:49.179 "zone_management": false, 00:08:49.179 "zone_append": false, 00:08:49.179 "compare": false, 00:08:49.179 "compare_and_write": false, 00:08:49.179 "abort": true, 00:08:49.179 "seek_hole": false, 00:08:49.179 "seek_data": false, 00:08:49.179 "copy": true, 00:08:49.179 "nvme_iov_md": false 00:08:49.179 }, 00:08:49.179 "memory_domains": [ 00:08:49.179 { 00:08:49.179 "dma_device_id": "system", 00:08:49.179 "dma_device_type": 1 00:08:49.179 }, 00:08:49.179 { 00:08:49.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.179 "dma_device_type": 2 00:08:49.179 } 00:08:49.179 ], 00:08:49.179 "driver_specific": {} 00:08:49.179 } 00:08:49.179 ] 00:08:49.179 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.179 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:49.179 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:49.179 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:49.179 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:49.179 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.179 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.179 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.179 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.179 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.179 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.179 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.179 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.179 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.179 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.179 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.179 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.179 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.179 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.179 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.179 "name": "Existed_Raid", 00:08:49.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.179 "strip_size_kb": 64, 00:08:49.179 "state": "configuring", 00:08:49.179 "raid_level": "concat", 00:08:49.179 "superblock": false, 
00:08:49.179 "num_base_bdevs": 3, 00:08:49.179 "num_base_bdevs_discovered": 2, 00:08:49.179 "num_base_bdevs_operational": 3, 00:08:49.179 "base_bdevs_list": [ 00:08:49.179 { 00:08:49.179 "name": "BaseBdev1", 00:08:49.179 "uuid": "769c0757-0f00-4a16-bced-271189d50ce6", 00:08:49.179 "is_configured": true, 00:08:49.179 "data_offset": 0, 00:08:49.179 "data_size": 65536 00:08:49.179 }, 00:08:49.179 { 00:08:49.179 "name": "BaseBdev2", 00:08:49.179 "uuid": "a6ee639c-cc48-4035-8d41-8130e1b9b731", 00:08:49.179 "is_configured": true, 00:08:49.179 "data_offset": 0, 00:08:49.179 "data_size": 65536 00:08:49.179 }, 00:08:49.179 { 00:08:49.179 "name": "BaseBdev3", 00:08:49.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.179 "is_configured": false, 00:08:49.179 "data_offset": 0, 00:08:49.179 "data_size": 0 00:08:49.179 } 00:08:49.179 ] 00:08:49.179 }' 00:08:49.179 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.179 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.438 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:49.438 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.438 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.697 [2024-11-27 11:47:15.827482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:49.697 [2024-11-27 11:47:15.827672] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:49.697 [2024-11-27 11:47:15.827709] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:49.697 [2024-11-27 11:47:15.828081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:49.697 [2024-11-27 11:47:15.828334] bdev_raid.c:1764:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000007e80 00:08:49.697 [2024-11-27 11:47:15.828385] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:49.697 [2024-11-27 11:47:15.828740] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.697 BaseBdev3 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.697 [ 00:08:49.697 { 00:08:49.697 "name": "BaseBdev3", 00:08:49.697 "aliases": [ 
00:08:49.697 "7214f177-3caa-4ea9-a14b-7c057f7826a0" 00:08:49.697 ], 00:08:49.697 "product_name": "Malloc disk", 00:08:49.697 "block_size": 512, 00:08:49.697 "num_blocks": 65536, 00:08:49.697 "uuid": "7214f177-3caa-4ea9-a14b-7c057f7826a0", 00:08:49.697 "assigned_rate_limits": { 00:08:49.697 "rw_ios_per_sec": 0, 00:08:49.697 "rw_mbytes_per_sec": 0, 00:08:49.697 "r_mbytes_per_sec": 0, 00:08:49.697 "w_mbytes_per_sec": 0 00:08:49.697 }, 00:08:49.697 "claimed": true, 00:08:49.697 "claim_type": "exclusive_write", 00:08:49.697 "zoned": false, 00:08:49.697 "supported_io_types": { 00:08:49.697 "read": true, 00:08:49.697 "write": true, 00:08:49.697 "unmap": true, 00:08:49.697 "flush": true, 00:08:49.697 "reset": true, 00:08:49.697 "nvme_admin": false, 00:08:49.697 "nvme_io": false, 00:08:49.697 "nvme_io_md": false, 00:08:49.697 "write_zeroes": true, 00:08:49.697 "zcopy": true, 00:08:49.697 "get_zone_info": false, 00:08:49.697 "zone_management": false, 00:08:49.697 "zone_append": false, 00:08:49.697 "compare": false, 00:08:49.697 "compare_and_write": false, 00:08:49.697 "abort": true, 00:08:49.697 "seek_hole": false, 00:08:49.697 "seek_data": false, 00:08:49.697 "copy": true, 00:08:49.697 "nvme_iov_md": false 00:08:49.697 }, 00:08:49.697 "memory_domains": [ 00:08:49.697 { 00:08:49.697 "dma_device_id": "system", 00:08:49.697 "dma_device_type": 1 00:08:49.697 }, 00:08:49.697 { 00:08:49.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.697 "dma_device_type": 2 00:08:49.697 } 00:08:49.697 ], 00:08:49.697 "driver_specific": {} 00:08:49.697 } 00:08:49.697 ] 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.697 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.698 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.698 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.698 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.698 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.698 "name": "Existed_Raid", 00:08:49.698 "uuid": "31cc2e82-ec3b-432c-85b7-a640a025b657", 00:08:49.698 "strip_size_kb": 64, 00:08:49.698 "state": "online", 
00:08:49.698 "raid_level": "concat", 00:08:49.698 "superblock": false, 00:08:49.698 "num_base_bdevs": 3, 00:08:49.698 "num_base_bdevs_discovered": 3, 00:08:49.698 "num_base_bdevs_operational": 3, 00:08:49.698 "base_bdevs_list": [ 00:08:49.698 { 00:08:49.698 "name": "BaseBdev1", 00:08:49.698 "uuid": "769c0757-0f00-4a16-bced-271189d50ce6", 00:08:49.698 "is_configured": true, 00:08:49.698 "data_offset": 0, 00:08:49.698 "data_size": 65536 00:08:49.698 }, 00:08:49.698 { 00:08:49.698 "name": "BaseBdev2", 00:08:49.698 "uuid": "a6ee639c-cc48-4035-8d41-8130e1b9b731", 00:08:49.698 "is_configured": true, 00:08:49.698 "data_offset": 0, 00:08:49.698 "data_size": 65536 00:08:49.698 }, 00:08:49.698 { 00:08:49.698 "name": "BaseBdev3", 00:08:49.698 "uuid": "7214f177-3caa-4ea9-a14b-7c057f7826a0", 00:08:49.698 "is_configured": true, 00:08:49.698 "data_offset": 0, 00:08:49.698 "data_size": 65536 00:08:49.698 } 00:08:49.698 ] 00:08:49.698 }' 00:08:49.698 11:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.698 11:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:50.264 11:47:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.264 [2024-11-27 11:47:16.355068] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:50.264 "name": "Existed_Raid", 00:08:50.264 "aliases": [ 00:08:50.264 "31cc2e82-ec3b-432c-85b7-a640a025b657" 00:08:50.264 ], 00:08:50.264 "product_name": "Raid Volume", 00:08:50.264 "block_size": 512, 00:08:50.264 "num_blocks": 196608, 00:08:50.264 "uuid": "31cc2e82-ec3b-432c-85b7-a640a025b657", 00:08:50.264 "assigned_rate_limits": { 00:08:50.264 "rw_ios_per_sec": 0, 00:08:50.264 "rw_mbytes_per_sec": 0, 00:08:50.264 "r_mbytes_per_sec": 0, 00:08:50.264 "w_mbytes_per_sec": 0 00:08:50.264 }, 00:08:50.264 "claimed": false, 00:08:50.264 "zoned": false, 00:08:50.264 "supported_io_types": { 00:08:50.264 "read": true, 00:08:50.264 "write": true, 00:08:50.264 "unmap": true, 00:08:50.264 "flush": true, 00:08:50.264 "reset": true, 00:08:50.264 "nvme_admin": false, 00:08:50.264 "nvme_io": false, 00:08:50.264 "nvme_io_md": false, 00:08:50.264 "write_zeroes": true, 00:08:50.264 "zcopy": false, 00:08:50.264 "get_zone_info": false, 00:08:50.264 "zone_management": false, 00:08:50.264 "zone_append": false, 00:08:50.264 "compare": false, 00:08:50.264 "compare_and_write": false, 00:08:50.264 "abort": false, 00:08:50.264 "seek_hole": false, 00:08:50.264 "seek_data": false, 00:08:50.264 "copy": false, 00:08:50.264 "nvme_iov_md": false 00:08:50.264 }, 00:08:50.264 "memory_domains": [ 00:08:50.264 { 00:08:50.264 "dma_device_id": "system", 00:08:50.264 "dma_device_type": 1 
00:08:50.264 }, 00:08:50.264 { 00:08:50.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.264 "dma_device_type": 2 00:08:50.264 }, 00:08:50.264 { 00:08:50.264 "dma_device_id": "system", 00:08:50.264 "dma_device_type": 1 00:08:50.264 }, 00:08:50.264 { 00:08:50.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.264 "dma_device_type": 2 00:08:50.264 }, 00:08:50.264 { 00:08:50.264 "dma_device_id": "system", 00:08:50.264 "dma_device_type": 1 00:08:50.264 }, 00:08:50.264 { 00:08:50.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.264 "dma_device_type": 2 00:08:50.264 } 00:08:50.264 ], 00:08:50.264 "driver_specific": { 00:08:50.264 "raid": { 00:08:50.264 "uuid": "31cc2e82-ec3b-432c-85b7-a640a025b657", 00:08:50.264 "strip_size_kb": 64, 00:08:50.264 "state": "online", 00:08:50.264 "raid_level": "concat", 00:08:50.264 "superblock": false, 00:08:50.264 "num_base_bdevs": 3, 00:08:50.264 "num_base_bdevs_discovered": 3, 00:08:50.264 "num_base_bdevs_operational": 3, 00:08:50.264 "base_bdevs_list": [ 00:08:50.264 { 00:08:50.264 "name": "BaseBdev1", 00:08:50.264 "uuid": "769c0757-0f00-4a16-bced-271189d50ce6", 00:08:50.264 "is_configured": true, 00:08:50.264 "data_offset": 0, 00:08:50.264 "data_size": 65536 00:08:50.264 }, 00:08:50.264 { 00:08:50.264 "name": "BaseBdev2", 00:08:50.264 "uuid": "a6ee639c-cc48-4035-8d41-8130e1b9b731", 00:08:50.264 "is_configured": true, 00:08:50.264 "data_offset": 0, 00:08:50.264 "data_size": 65536 00:08:50.264 }, 00:08:50.264 { 00:08:50.264 "name": "BaseBdev3", 00:08:50.264 "uuid": "7214f177-3caa-4ea9-a14b-7c057f7826a0", 00:08:50.264 "is_configured": true, 00:08:50.264 "data_offset": 0, 00:08:50.264 "data_size": 65536 00:08:50.264 } 00:08:50.264 ] 00:08:50.264 } 00:08:50.264 } 00:08:50.264 }' 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:50.264 BaseBdev2 00:08:50.264 BaseBdev3' 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.264 11:47:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.265 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.265 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.265 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.265 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:50.265 11:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.265 11:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.265 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.265 11:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.523 [2024-11-27 11:47:16.666243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:50.523 [2024-11-27 11:47:16.666373] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:50.523 [2024-11-27 11:47:16.666448] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.523 "name": "Existed_Raid", 00:08:50.523 "uuid": "31cc2e82-ec3b-432c-85b7-a640a025b657", 00:08:50.523 "strip_size_kb": 64, 00:08:50.523 "state": "offline", 00:08:50.523 "raid_level": "concat", 00:08:50.523 "superblock": false, 00:08:50.523 "num_base_bdevs": 3, 00:08:50.523 "num_base_bdevs_discovered": 2, 00:08:50.523 "num_base_bdevs_operational": 2, 00:08:50.523 "base_bdevs_list": [ 00:08:50.523 { 00:08:50.523 "name": null, 00:08:50.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.523 "is_configured": false, 00:08:50.523 "data_offset": 0, 00:08:50.523 "data_size": 65536 00:08:50.523 }, 00:08:50.523 { 00:08:50.523 "name": "BaseBdev2", 00:08:50.523 "uuid": "a6ee639c-cc48-4035-8d41-8130e1b9b731", 00:08:50.523 "is_configured": true, 00:08:50.523 "data_offset": 0, 00:08:50.523 "data_size": 65536 00:08:50.523 }, 00:08:50.523 { 00:08:50.523 "name": "BaseBdev3", 00:08:50.523 "uuid": "7214f177-3caa-4ea9-a14b-7c057f7826a0", 00:08:50.523 "is_configured": true, 00:08:50.523 "data_offset": 0, 00:08:50.523 "data_size": 65536 00:08:50.523 } 00:08:50.523 ] 00:08:50.523 }' 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.523 11:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.092 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:51.092 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:51.092 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:51.092 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.092 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.092 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:51.092 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.092 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:51.092 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:51.092 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:51.092 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.092 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.092 [2024-11-27 11:47:17.309463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:51.092 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.092 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:51.092 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:51.092 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.092 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:51.092 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.092 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.092 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.351 11:47:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:51.351 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:51.351 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:51.351 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.351 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.351 [2024-11-27 11:47:17.484146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:51.351 [2024-11-27 11:47:17.484299] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:51.351 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.351 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:51.351 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:51.351 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.351 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:51.351 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.351 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.351 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.351 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:51.351 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:51.352 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:51.352 
11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:51.352 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:51.352 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:51.352 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.352 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.352 BaseBdev2 00:08:51.352 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.352 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:51.352 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:51.352 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:51.352 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:51.352 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:51.352 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:51.352 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:51.352 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.352 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.352 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.352 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:51.352 11:47:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.352 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.352 [ 00:08:51.352 { 00:08:51.352 "name": "BaseBdev2", 00:08:51.352 "aliases": [ 00:08:51.352 "59d4936e-88cb-476f-8e6b-9cc91bd8180a" 00:08:51.352 ], 00:08:51.352 "product_name": "Malloc disk", 00:08:51.352 "block_size": 512, 00:08:51.352 "num_blocks": 65536, 00:08:51.352 "uuid": "59d4936e-88cb-476f-8e6b-9cc91bd8180a", 00:08:51.352 "assigned_rate_limits": { 00:08:51.352 "rw_ios_per_sec": 0, 00:08:51.352 "rw_mbytes_per_sec": 0, 00:08:51.352 "r_mbytes_per_sec": 0, 00:08:51.352 "w_mbytes_per_sec": 0 00:08:51.352 }, 00:08:51.352 "claimed": false, 00:08:51.352 "zoned": false, 00:08:51.352 "supported_io_types": { 00:08:51.352 "read": true, 00:08:51.352 "write": true, 00:08:51.352 "unmap": true, 00:08:51.352 "flush": true, 00:08:51.352 "reset": true, 00:08:51.352 "nvme_admin": false, 00:08:51.352 "nvme_io": false, 00:08:51.352 "nvme_io_md": false, 00:08:51.352 "write_zeroes": true, 00:08:51.352 "zcopy": true, 00:08:51.352 "get_zone_info": false, 00:08:51.352 "zone_management": false, 00:08:51.352 "zone_append": false, 00:08:51.352 "compare": false, 00:08:51.352 "compare_and_write": false, 00:08:51.352 "abort": true, 00:08:51.352 "seek_hole": false, 00:08:51.352 "seek_data": false, 00:08:51.352 "copy": true, 00:08:51.352 "nvme_iov_md": false 00:08:51.352 }, 00:08:51.352 "memory_domains": [ 00:08:51.352 { 00:08:51.352 "dma_device_id": "system", 00:08:51.352 "dma_device_type": 1 00:08:51.352 }, 00:08:51.352 { 00:08:51.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.610 "dma_device_type": 2 00:08:51.610 } 00:08:51.610 ], 00:08:51.610 "driver_specific": {} 00:08:51.610 } 00:08:51.610 ] 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:51.610 
11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.610 BaseBdev3 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.610 [ 00:08:51.610 { 00:08:51.610 "name": "BaseBdev3", 00:08:51.610 "aliases": [ 00:08:51.610 "37da8b51-2075-4e63-993a-c1b18b7a8fe7" 00:08:51.610 ], 00:08:51.610 "product_name": "Malloc disk", 00:08:51.610 "block_size": 512, 00:08:51.610 "num_blocks": 65536, 00:08:51.610 "uuid": "37da8b51-2075-4e63-993a-c1b18b7a8fe7", 00:08:51.610 "assigned_rate_limits": { 00:08:51.610 "rw_ios_per_sec": 0, 00:08:51.610 "rw_mbytes_per_sec": 0, 00:08:51.610 "r_mbytes_per_sec": 0, 00:08:51.610 "w_mbytes_per_sec": 0 00:08:51.610 }, 00:08:51.610 "claimed": false, 00:08:51.610 "zoned": false, 00:08:51.610 "supported_io_types": { 00:08:51.610 "read": true, 00:08:51.610 "write": true, 00:08:51.610 "unmap": true, 00:08:51.610 "flush": true, 00:08:51.610 "reset": true, 00:08:51.610 "nvme_admin": false, 00:08:51.610 "nvme_io": false, 00:08:51.610 "nvme_io_md": false, 00:08:51.610 "write_zeroes": true, 00:08:51.610 "zcopy": true, 00:08:51.610 "get_zone_info": false, 00:08:51.610 "zone_management": false, 00:08:51.610 "zone_append": false, 00:08:51.610 "compare": false, 00:08:51.610 "compare_and_write": false, 00:08:51.610 "abort": true, 00:08:51.610 "seek_hole": false, 00:08:51.610 "seek_data": false, 00:08:51.610 "copy": true, 00:08:51.610 "nvme_iov_md": false 00:08:51.610 }, 00:08:51.610 "memory_domains": [ 00:08:51.610 { 00:08:51.610 "dma_device_id": "system", 00:08:51.610 "dma_device_type": 1 00:08:51.610 }, 00:08:51.610 { 00:08:51.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.610 "dma_device_type": 2 00:08:51.610 } 00:08:51.610 ], 00:08:51.610 "driver_specific": {} 00:08:51.610 } 00:08:51.610 ] 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:51.610 
11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.610 [2024-11-27 11:47:17.826133] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:51.610 [2024-11-27 11:47:17.826292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:51.610 [2024-11-27 11:47:17.826351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:51.610 [2024-11-27 11:47:17.828624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.610 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.611 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.611 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.611 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.611 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.611 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.611 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.611 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.611 "name": "Existed_Raid", 00:08:51.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.611 "strip_size_kb": 64, 00:08:51.611 "state": "configuring", 00:08:51.611 "raid_level": "concat", 00:08:51.611 "superblock": false, 00:08:51.611 "num_base_bdevs": 3, 00:08:51.611 "num_base_bdevs_discovered": 2, 00:08:51.611 "num_base_bdevs_operational": 3, 00:08:51.611 "base_bdevs_list": [ 00:08:51.611 { 00:08:51.611 "name": "BaseBdev1", 00:08:51.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.611 "is_configured": false, 00:08:51.611 "data_offset": 0, 00:08:51.611 "data_size": 0 00:08:51.611 }, 00:08:51.611 { 00:08:51.611 "name": "BaseBdev2", 00:08:51.611 "uuid": "59d4936e-88cb-476f-8e6b-9cc91bd8180a", 00:08:51.611 "is_configured": true, 00:08:51.611 "data_offset": 0, 00:08:51.611 "data_size": 65536 00:08:51.611 }, 00:08:51.611 { 00:08:51.611 "name": "BaseBdev3", 00:08:51.611 "uuid": 
"37da8b51-2075-4e63-993a-c1b18b7a8fe7", 00:08:51.611 "is_configured": true, 00:08:51.611 "data_offset": 0, 00:08:51.611 "data_size": 65536 00:08:51.611 } 00:08:51.611 ] 00:08:51.611 }' 00:08:51.611 11:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.611 11:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.175 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:52.175 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.175 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.175 [2024-11-27 11:47:18.289478] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:52.175 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.175 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:52.175 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.175 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.175 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.175 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.175 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.175 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.175 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.175 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:52.175 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.175 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.175 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.175 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.175 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.175 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.175 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.175 "name": "Existed_Raid", 00:08:52.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.175 "strip_size_kb": 64, 00:08:52.175 "state": "configuring", 00:08:52.175 "raid_level": "concat", 00:08:52.175 "superblock": false, 00:08:52.175 "num_base_bdevs": 3, 00:08:52.175 "num_base_bdevs_discovered": 1, 00:08:52.175 "num_base_bdevs_operational": 3, 00:08:52.175 "base_bdevs_list": [ 00:08:52.175 { 00:08:52.175 "name": "BaseBdev1", 00:08:52.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.175 "is_configured": false, 00:08:52.175 "data_offset": 0, 00:08:52.175 "data_size": 0 00:08:52.175 }, 00:08:52.175 { 00:08:52.175 "name": null, 00:08:52.175 "uuid": "59d4936e-88cb-476f-8e6b-9cc91bd8180a", 00:08:52.175 "is_configured": false, 00:08:52.175 "data_offset": 0, 00:08:52.175 "data_size": 65536 00:08:52.175 }, 00:08:52.175 { 00:08:52.175 "name": "BaseBdev3", 00:08:52.175 "uuid": "37da8b51-2075-4e63-993a-c1b18b7a8fe7", 00:08:52.175 "is_configured": true, 00:08:52.175 "data_offset": 0, 00:08:52.175 "data_size": 65536 00:08:52.175 } 00:08:52.175 ] 00:08:52.175 }' 00:08:52.175 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:52.175 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.433 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.433 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.433 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.433 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:52.433 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.433 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:52.433 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:52.433 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.433 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.433 [2024-11-27 11:47:18.754895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:52.433 BaseBdev1 00:08:52.433 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.433 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:52.433 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:52.433 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:52.433 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:52.433 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:52.433 11:47:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:52.433 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:52.433 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.434 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.434 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.434 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:52.434 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.434 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.434 [ 00:08:52.434 { 00:08:52.434 "name": "BaseBdev1", 00:08:52.434 "aliases": [ 00:08:52.434 "b46e4c9e-0784-428b-aa6a-fd28d79e4f20" 00:08:52.434 ], 00:08:52.434 "product_name": "Malloc disk", 00:08:52.434 "block_size": 512, 00:08:52.434 "num_blocks": 65536, 00:08:52.434 "uuid": "b46e4c9e-0784-428b-aa6a-fd28d79e4f20", 00:08:52.434 "assigned_rate_limits": { 00:08:52.434 "rw_ios_per_sec": 0, 00:08:52.434 "rw_mbytes_per_sec": 0, 00:08:52.434 "r_mbytes_per_sec": 0, 00:08:52.434 "w_mbytes_per_sec": 0 00:08:52.434 }, 00:08:52.434 "claimed": true, 00:08:52.434 "claim_type": "exclusive_write", 00:08:52.434 "zoned": false, 00:08:52.434 "supported_io_types": { 00:08:52.434 "read": true, 00:08:52.434 "write": true, 00:08:52.434 "unmap": true, 00:08:52.434 "flush": true, 00:08:52.434 "reset": true, 00:08:52.434 "nvme_admin": false, 00:08:52.434 "nvme_io": false, 00:08:52.434 "nvme_io_md": false, 00:08:52.434 "write_zeroes": true, 00:08:52.434 "zcopy": true, 00:08:52.434 "get_zone_info": false, 00:08:52.434 "zone_management": false, 00:08:52.434 "zone_append": false, 00:08:52.434 "compare": false, 00:08:52.434 "compare_and_write": false, 
00:08:52.434 "abort": true, 00:08:52.434 "seek_hole": false, 00:08:52.434 "seek_data": false, 00:08:52.434 "copy": true, 00:08:52.434 "nvme_iov_md": false 00:08:52.434 }, 00:08:52.434 "memory_domains": [ 00:08:52.434 { 00:08:52.434 "dma_device_id": "system", 00:08:52.434 "dma_device_type": 1 00:08:52.434 }, 00:08:52.434 { 00:08:52.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.434 "dma_device_type": 2 00:08:52.434 } 00:08:52.434 ], 00:08:52.434 "driver_specific": {} 00:08:52.434 } 00:08:52.434 ] 00:08:52.434 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.434 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:52.434 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:52.434 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.434 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.434 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.434 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.434 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.434 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.434 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.434 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.434 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.434 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:52.434 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.434 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.434 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.692 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.692 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.692 "name": "Existed_Raid", 00:08:52.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.692 "strip_size_kb": 64, 00:08:52.692 "state": "configuring", 00:08:52.692 "raid_level": "concat", 00:08:52.692 "superblock": false, 00:08:52.692 "num_base_bdevs": 3, 00:08:52.692 "num_base_bdevs_discovered": 2, 00:08:52.692 "num_base_bdevs_operational": 3, 00:08:52.692 "base_bdevs_list": [ 00:08:52.692 { 00:08:52.692 "name": "BaseBdev1", 00:08:52.692 "uuid": "b46e4c9e-0784-428b-aa6a-fd28d79e4f20", 00:08:52.692 "is_configured": true, 00:08:52.692 "data_offset": 0, 00:08:52.692 "data_size": 65536 00:08:52.692 }, 00:08:52.692 { 00:08:52.692 "name": null, 00:08:52.692 "uuid": "59d4936e-88cb-476f-8e6b-9cc91bd8180a", 00:08:52.692 "is_configured": false, 00:08:52.692 "data_offset": 0, 00:08:52.692 "data_size": 65536 00:08:52.692 }, 00:08:52.692 { 00:08:52.692 "name": "BaseBdev3", 00:08:52.692 "uuid": "37da8b51-2075-4e63-993a-c1b18b7a8fe7", 00:08:52.692 "is_configured": true, 00:08:52.692 "data_offset": 0, 00:08:52.692 "data_size": 65536 00:08:52.692 } 00:08:52.692 ] 00:08:52.692 }' 00:08:52.692 11:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.692 11:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.952 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.952 11:47:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:52.952 11:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.952 11:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.952 11:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.952 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:52.952 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:52.952 11:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.952 11:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.241 [2024-11-27 11:47:19.334013] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:53.241 11:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.241 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:53.241 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.241 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.241 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.241 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.241 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.242 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.242 11:47:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.242 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.242 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.242 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.242 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.242 11:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.242 11:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.242 11:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.242 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.242 "name": "Existed_Raid", 00:08:53.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.242 "strip_size_kb": 64, 00:08:53.242 "state": "configuring", 00:08:53.242 "raid_level": "concat", 00:08:53.242 "superblock": false, 00:08:53.242 "num_base_bdevs": 3, 00:08:53.242 "num_base_bdevs_discovered": 1, 00:08:53.242 "num_base_bdevs_operational": 3, 00:08:53.242 "base_bdevs_list": [ 00:08:53.242 { 00:08:53.242 "name": "BaseBdev1", 00:08:53.242 "uuid": "b46e4c9e-0784-428b-aa6a-fd28d79e4f20", 00:08:53.242 "is_configured": true, 00:08:53.242 "data_offset": 0, 00:08:53.242 "data_size": 65536 00:08:53.242 }, 00:08:53.242 { 00:08:53.242 "name": null, 00:08:53.242 "uuid": "59d4936e-88cb-476f-8e6b-9cc91bd8180a", 00:08:53.242 "is_configured": false, 00:08:53.242 "data_offset": 0, 00:08:53.242 "data_size": 65536 00:08:53.242 }, 00:08:53.242 { 00:08:53.242 "name": null, 00:08:53.242 "uuid": "37da8b51-2075-4e63-993a-c1b18b7a8fe7", 00:08:53.242 "is_configured": false, 00:08:53.242 "data_offset": 0, 00:08:53.242 "data_size": 65536 00:08:53.242 
} 00:08:53.242 ] 00:08:53.242 }' 00:08:53.242 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.242 11:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.501 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:53.501 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.501 11:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.501 11:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.501 11:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.501 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:53.501 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:53.501 11:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.501 11:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.501 [2024-11-27 11:47:19.817281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:53.501 11:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.501 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:53.501 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.501 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.501 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:08:53.501 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.501 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.501 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.501 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.501 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.501 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.501 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.501 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.501 11:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.501 11:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.501 11:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.501 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.501 "name": "Existed_Raid", 00:08:53.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.502 "strip_size_kb": 64, 00:08:53.502 "state": "configuring", 00:08:53.502 "raid_level": "concat", 00:08:53.502 "superblock": false, 00:08:53.502 "num_base_bdevs": 3, 00:08:53.502 "num_base_bdevs_discovered": 2, 00:08:53.502 "num_base_bdevs_operational": 3, 00:08:53.502 "base_bdevs_list": [ 00:08:53.502 { 00:08:53.502 "name": "BaseBdev1", 00:08:53.502 "uuid": "b46e4c9e-0784-428b-aa6a-fd28d79e4f20", 00:08:53.502 "is_configured": true, 00:08:53.502 "data_offset": 0, 00:08:53.502 "data_size": 65536 00:08:53.502 }, 00:08:53.502 { 
00:08:53.502 "name": null, 00:08:53.502 "uuid": "59d4936e-88cb-476f-8e6b-9cc91bd8180a", 00:08:53.502 "is_configured": false, 00:08:53.502 "data_offset": 0, 00:08:53.502 "data_size": 65536 00:08:53.502 }, 00:08:53.502 { 00:08:53.502 "name": "BaseBdev3", 00:08:53.502 "uuid": "37da8b51-2075-4e63-993a-c1b18b7a8fe7", 00:08:53.502 "is_configured": true, 00:08:53.502 "data_offset": 0, 00:08:53.502 "data_size": 65536 00:08:53.502 } 00:08:53.502 ] 00:08:53.502 }' 00:08:53.502 11:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.502 11:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.069 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.069 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:54.069 11:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.069 11:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.069 11:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.069 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:54.069 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:54.069 11:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.069 11:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.070 [2024-11-27 11:47:20.352428] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:54.328 11:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.328 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state 
Existed_Raid configuring concat 64 3 00:08:54.328 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.328 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.328 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.328 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.328 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.328 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.328 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.328 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.328 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.328 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.328 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.328 11:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.328 11:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.328 11:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.328 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.328 "name": "Existed_Raid", 00:08:54.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.328 "strip_size_kb": 64, 00:08:54.328 "state": "configuring", 00:08:54.328 "raid_level": "concat", 00:08:54.328 "superblock": false, 00:08:54.329 "num_base_bdevs": 3, 
00:08:54.329 "num_base_bdevs_discovered": 1, 00:08:54.329 "num_base_bdevs_operational": 3, 00:08:54.329 "base_bdevs_list": [ 00:08:54.329 { 00:08:54.329 "name": null, 00:08:54.329 "uuid": "b46e4c9e-0784-428b-aa6a-fd28d79e4f20", 00:08:54.329 "is_configured": false, 00:08:54.329 "data_offset": 0, 00:08:54.329 "data_size": 65536 00:08:54.329 }, 00:08:54.329 { 00:08:54.329 "name": null, 00:08:54.329 "uuid": "59d4936e-88cb-476f-8e6b-9cc91bd8180a", 00:08:54.329 "is_configured": false, 00:08:54.329 "data_offset": 0, 00:08:54.329 "data_size": 65536 00:08:54.329 }, 00:08:54.329 { 00:08:54.329 "name": "BaseBdev3", 00:08:54.329 "uuid": "37da8b51-2075-4e63-993a-c1b18b7a8fe7", 00:08:54.329 "is_configured": true, 00:08:54.329 "data_offset": 0, 00:08:54.329 "data_size": 65536 00:08:54.329 } 00:08:54.329 ] 00:08:54.329 }' 00:08:54.329 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.329 11:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.587 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.587 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:54.587 11:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.587 11:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.587 11:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.845 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:54.845 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:54.845 11:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.845 11:47:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.845 [2024-11-27 11:47:20.978670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:54.845 11:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.845 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:54.845 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.845 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.845 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.845 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.845 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.845 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.845 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.845 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.845 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.845 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.845 11:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.845 11:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.845 11:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.845 11:47:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.845 11:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.845 "name": "Existed_Raid", 00:08:54.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.845 "strip_size_kb": 64, 00:08:54.845 "state": "configuring", 00:08:54.845 "raid_level": "concat", 00:08:54.845 "superblock": false, 00:08:54.845 "num_base_bdevs": 3, 00:08:54.845 "num_base_bdevs_discovered": 2, 00:08:54.845 "num_base_bdevs_operational": 3, 00:08:54.845 "base_bdevs_list": [ 00:08:54.845 { 00:08:54.845 "name": null, 00:08:54.845 "uuid": "b46e4c9e-0784-428b-aa6a-fd28d79e4f20", 00:08:54.845 "is_configured": false, 00:08:54.845 "data_offset": 0, 00:08:54.845 "data_size": 65536 00:08:54.845 }, 00:08:54.845 { 00:08:54.845 "name": "BaseBdev2", 00:08:54.845 "uuid": "59d4936e-88cb-476f-8e6b-9cc91bd8180a", 00:08:54.845 "is_configured": true, 00:08:54.845 "data_offset": 0, 00:08:54.845 "data_size": 65536 00:08:54.845 }, 00:08:54.845 { 00:08:54.845 "name": "BaseBdev3", 00:08:54.845 "uuid": "37da8b51-2075-4e63-993a-c1b18b7a8fe7", 00:08:54.845 "is_configured": true, 00:08:54.845 "data_offset": 0, 00:08:54.845 "data_size": 65536 00:08:54.845 } 00:08:54.845 ] 00:08:54.845 }' 00:08:54.845 11:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.845 11:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.105 11:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:55.105 11:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.105 11:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.105 11:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.105 11:47:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.105 11:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:55.364 11:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.364 11:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:55.364 11:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.364 11:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.364 11:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.364 11:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b46e4c9e-0784-428b-aa6a-fd28d79e4f20 00:08:55.364 11:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.364 11:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.364 [2024-11-27 11:47:21.580260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:55.364 [2024-11-27 11:47:21.580324] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:55.364 [2024-11-27 11:47:21.580335] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:55.364 [2024-11-27 11:47:21.580619] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:55.364 [2024-11-27 11:47:21.580787] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:55.364 [2024-11-27 11:47:21.580799] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:55.364 [2024-11-27 11:47:21.581167] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:08:55.364 NewBaseBdev 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.365 [ 00:08:55.365 { 00:08:55.365 "name": "NewBaseBdev", 00:08:55.365 "aliases": [ 00:08:55.365 "b46e4c9e-0784-428b-aa6a-fd28d79e4f20" 00:08:55.365 ], 00:08:55.365 "product_name": "Malloc disk", 00:08:55.365 "block_size": 512, 00:08:55.365 "num_blocks": 65536, 00:08:55.365 "uuid": "b46e4c9e-0784-428b-aa6a-fd28d79e4f20", 00:08:55.365 "assigned_rate_limits": { 
00:08:55.365 "rw_ios_per_sec": 0, 00:08:55.365 "rw_mbytes_per_sec": 0, 00:08:55.365 "r_mbytes_per_sec": 0, 00:08:55.365 "w_mbytes_per_sec": 0 00:08:55.365 }, 00:08:55.365 "claimed": true, 00:08:55.365 "claim_type": "exclusive_write", 00:08:55.365 "zoned": false, 00:08:55.365 "supported_io_types": { 00:08:55.365 "read": true, 00:08:55.365 "write": true, 00:08:55.365 "unmap": true, 00:08:55.365 "flush": true, 00:08:55.365 "reset": true, 00:08:55.365 "nvme_admin": false, 00:08:55.365 "nvme_io": false, 00:08:55.365 "nvme_io_md": false, 00:08:55.365 "write_zeroes": true, 00:08:55.365 "zcopy": true, 00:08:55.365 "get_zone_info": false, 00:08:55.365 "zone_management": false, 00:08:55.365 "zone_append": false, 00:08:55.365 "compare": false, 00:08:55.365 "compare_and_write": false, 00:08:55.365 "abort": true, 00:08:55.365 "seek_hole": false, 00:08:55.365 "seek_data": false, 00:08:55.365 "copy": true, 00:08:55.365 "nvme_iov_md": false 00:08:55.365 }, 00:08:55.365 "memory_domains": [ 00:08:55.365 { 00:08:55.365 "dma_device_id": "system", 00:08:55.365 "dma_device_type": 1 00:08:55.365 }, 00:08:55.365 { 00:08:55.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.365 "dma_device_type": 2 00:08:55.365 } 00:08:55.365 ], 00:08:55.365 "driver_specific": {} 00:08:55.365 } 00:08:55.365 ] 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 
00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.365 "name": "Existed_Raid", 00:08:55.365 "uuid": "7257e194-ae24-469d-b24d-f9f36a1fa860", 00:08:55.365 "strip_size_kb": 64, 00:08:55.365 "state": "online", 00:08:55.365 "raid_level": "concat", 00:08:55.365 "superblock": false, 00:08:55.365 "num_base_bdevs": 3, 00:08:55.365 "num_base_bdevs_discovered": 3, 00:08:55.365 "num_base_bdevs_operational": 3, 00:08:55.365 "base_bdevs_list": [ 00:08:55.365 { 00:08:55.365 "name": "NewBaseBdev", 00:08:55.365 "uuid": "b46e4c9e-0784-428b-aa6a-fd28d79e4f20", 00:08:55.365 "is_configured": true, 00:08:55.365 "data_offset": 0, 00:08:55.365 "data_size": 65536 00:08:55.365 }, 00:08:55.365 { 00:08:55.365 "name": 
"BaseBdev2", 00:08:55.365 "uuid": "59d4936e-88cb-476f-8e6b-9cc91bd8180a", 00:08:55.365 "is_configured": true, 00:08:55.365 "data_offset": 0, 00:08:55.365 "data_size": 65536 00:08:55.365 }, 00:08:55.365 { 00:08:55.365 "name": "BaseBdev3", 00:08:55.365 "uuid": "37da8b51-2075-4e63-993a-c1b18b7a8fe7", 00:08:55.365 "is_configured": true, 00:08:55.365 "data_offset": 0, 00:08:55.365 "data_size": 65536 00:08:55.365 } 00:08:55.365 ] 00:08:55.365 }' 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.365 11:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:55.935 [2024-11-27 11:47:22.112026] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:55.935 "name": "Existed_Raid", 00:08:55.935 "aliases": [ 00:08:55.935 "7257e194-ae24-469d-b24d-f9f36a1fa860" 00:08:55.935 ], 00:08:55.935 "product_name": "Raid Volume", 00:08:55.935 "block_size": 512, 00:08:55.935 "num_blocks": 196608, 00:08:55.935 "uuid": "7257e194-ae24-469d-b24d-f9f36a1fa860", 00:08:55.935 "assigned_rate_limits": { 00:08:55.935 "rw_ios_per_sec": 0, 00:08:55.935 "rw_mbytes_per_sec": 0, 00:08:55.935 "r_mbytes_per_sec": 0, 00:08:55.935 "w_mbytes_per_sec": 0 00:08:55.935 }, 00:08:55.935 "claimed": false, 00:08:55.935 "zoned": false, 00:08:55.935 "supported_io_types": { 00:08:55.935 "read": true, 00:08:55.935 "write": true, 00:08:55.935 "unmap": true, 00:08:55.935 "flush": true, 00:08:55.935 "reset": true, 00:08:55.935 "nvme_admin": false, 00:08:55.935 "nvme_io": false, 00:08:55.935 "nvme_io_md": false, 00:08:55.935 "write_zeroes": true, 00:08:55.935 "zcopy": false, 00:08:55.935 "get_zone_info": false, 00:08:55.935 "zone_management": false, 00:08:55.935 "zone_append": false, 00:08:55.935 "compare": false, 00:08:55.935 "compare_and_write": false, 00:08:55.935 "abort": false, 00:08:55.935 "seek_hole": false, 00:08:55.935 "seek_data": false, 00:08:55.935 "copy": false, 00:08:55.935 "nvme_iov_md": false 00:08:55.935 }, 00:08:55.935 "memory_domains": [ 00:08:55.935 { 00:08:55.935 "dma_device_id": "system", 00:08:55.935 "dma_device_type": 1 00:08:55.935 }, 00:08:55.935 { 00:08:55.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.935 "dma_device_type": 2 00:08:55.935 }, 00:08:55.935 { 00:08:55.935 "dma_device_id": "system", 00:08:55.935 "dma_device_type": 1 00:08:55.935 }, 00:08:55.935 { 00:08:55.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.935 "dma_device_type": 2 00:08:55.935 }, 00:08:55.935 { 00:08:55.935 "dma_device_id": "system", 00:08:55.935 "dma_device_type": 1 00:08:55.935 }, 00:08:55.935 { 00:08:55.935 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:55.935 "dma_device_type": 2 00:08:55.935 } 00:08:55.935 ], 00:08:55.935 "driver_specific": { 00:08:55.935 "raid": { 00:08:55.935 "uuid": "7257e194-ae24-469d-b24d-f9f36a1fa860", 00:08:55.935 "strip_size_kb": 64, 00:08:55.935 "state": "online", 00:08:55.935 "raid_level": "concat", 00:08:55.935 "superblock": false, 00:08:55.935 "num_base_bdevs": 3, 00:08:55.935 "num_base_bdevs_discovered": 3, 00:08:55.935 "num_base_bdevs_operational": 3, 00:08:55.935 "base_bdevs_list": [ 00:08:55.935 { 00:08:55.935 "name": "NewBaseBdev", 00:08:55.935 "uuid": "b46e4c9e-0784-428b-aa6a-fd28d79e4f20", 00:08:55.935 "is_configured": true, 00:08:55.935 "data_offset": 0, 00:08:55.935 "data_size": 65536 00:08:55.935 }, 00:08:55.935 { 00:08:55.935 "name": "BaseBdev2", 00:08:55.935 "uuid": "59d4936e-88cb-476f-8e6b-9cc91bd8180a", 00:08:55.935 "is_configured": true, 00:08:55.935 "data_offset": 0, 00:08:55.935 "data_size": 65536 00:08:55.935 }, 00:08:55.935 { 00:08:55.935 "name": "BaseBdev3", 00:08:55.935 "uuid": "37da8b51-2075-4e63-993a-c1b18b7a8fe7", 00:08:55.935 "is_configured": true, 00:08:55.935 "data_offset": 0, 00:08:55.935 "data_size": 65536 00:08:55.935 } 00:08:55.935 ] 00:08:55.935 } 00:08:55.935 } 00:08:55.935 }' 00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:55.935 BaseBdev2 00:08:55.935 BaseBdev3' 00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.935 11:47:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.935 11:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.195 11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.195 11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.195 11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.195 11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:56.195 
11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.195 11:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.195 11:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.195 11:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.195 11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.195 11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.195 11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:56.195 11:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.195 11:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.195 [2024-11-27 11:47:22.399141] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:56.195 [2024-11-27 11:47:22.399176] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:56.195 [2024-11-27 11:47:22.399284] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:56.195 [2024-11-27 11:47:22.399345] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:56.195 [2024-11-27 11:47:22.399360] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:56.195 11:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.195 11:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65576 00:08:56.195 11:47:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 65576 ']' 00:08:56.195 11:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65576 00:08:56.195 11:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:56.195 11:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.195 11:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65576 00:08:56.195 11:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:56.195 killing process with pid 65576 00:08:56.195 11:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:56.195 11:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65576' 00:08:56.195 11:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65576 00:08:56.195 [2024-11-27 11:47:22.444350] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:56.195 11:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65576 00:08:56.454 [2024-11-27 11:47:22.792364] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:57.884 00:08:57.884 real 0m11.391s 00:08:57.884 user 0m18.081s 00:08:57.884 sys 0m1.853s 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.884 ************************************ 00:08:57.884 END TEST raid_state_function_test 00:08:57.884 ************************************ 00:08:57.884 11:47:24 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test concat 3 true 00:08:57.884 11:47:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:57.884 11:47:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.884 11:47:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:57.884 ************************************ 00:08:57.884 START TEST raid_state_function_test_sb 00:08:57.884 ************************************ 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 
00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66204 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66204' 00:08:57.884 Process raid pid: 66204 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 66204 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66204 ']' 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.884 11:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.884 [2024-11-27 11:47:24.238908] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:08:57.884 [2024-11-27 11:47:24.239123] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.143 [2024-11-27 11:47:24.402524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.402 [2024-11-27 11:47:24.535772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.402 [2024-11-27 11:47:24.772967] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.402 [2024-11-27 11:47:24.773134] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.983 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:58.983 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:58.983 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:58.983 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.983 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.983 [2024-11-27 11:47:25.165959] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:58.983 [2024-11-27 11:47:25.166119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:58.983 [2024-11-27 11:47:25.166138] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:58.983 [2024-11-27 11:47:25.166150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:58.983 [2024-11-27 11:47:25.166158] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:08:58.983 [2024-11-27 11:47:25.166169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:58.983 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.983 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:58.983 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.983 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.983 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.983 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.983 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.983 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.983 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.983 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.983 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.983 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.983 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.983 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.983 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.983 11:47:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.983 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.983 "name": "Existed_Raid", 00:08:58.983 "uuid": "5453a9e8-fe88-4a36-a15d-71c2f97b863c", 00:08:58.983 "strip_size_kb": 64, 00:08:58.983 "state": "configuring", 00:08:58.983 "raid_level": "concat", 00:08:58.983 "superblock": true, 00:08:58.983 "num_base_bdevs": 3, 00:08:58.983 "num_base_bdevs_discovered": 0, 00:08:58.983 "num_base_bdevs_operational": 3, 00:08:58.983 "base_bdevs_list": [ 00:08:58.983 { 00:08:58.983 "name": "BaseBdev1", 00:08:58.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.983 "is_configured": false, 00:08:58.983 "data_offset": 0, 00:08:58.983 "data_size": 0 00:08:58.983 }, 00:08:58.983 { 00:08:58.983 "name": "BaseBdev2", 00:08:58.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.983 "is_configured": false, 00:08:58.983 "data_offset": 0, 00:08:58.983 "data_size": 0 00:08:58.983 }, 00:08:58.983 { 00:08:58.983 "name": "BaseBdev3", 00:08:58.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.983 "is_configured": false, 00:08:58.983 "data_offset": 0, 00:08:58.983 "data_size": 0 00:08:58.983 } 00:08:58.983 ] 00:08:58.983 }' 00:08:58.983 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.983 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.549 [2024-11-27 11:47:25.669035] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:59.549 [2024-11-27 11:47:25.669158] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.549 [2024-11-27 11:47:25.681069] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:59.549 [2024-11-27 11:47:25.681209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:59.549 [2024-11-27 11:47:25.681246] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:59.549 [2024-11-27 11:47:25.681273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:59.549 [2024-11-27 11:47:25.681324] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:59.549 [2024-11-27 11:47:25.681352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.549 [2024-11-27 11:47:25.735450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:59.549 BaseBdev1 
00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.549 [ 00:08:59.549 { 00:08:59.549 "name": "BaseBdev1", 00:08:59.549 "aliases": [ 00:08:59.549 "792c6955-1ca3-43f1-8648-22eec36f7463" 00:08:59.549 ], 00:08:59.549 "product_name": "Malloc disk", 00:08:59.549 "block_size": 512, 00:08:59.549 "num_blocks": 65536, 00:08:59.549 "uuid": "792c6955-1ca3-43f1-8648-22eec36f7463", 00:08:59.549 "assigned_rate_limits": { 00:08:59.549 
"rw_ios_per_sec": 0, 00:08:59.549 "rw_mbytes_per_sec": 0, 00:08:59.549 "r_mbytes_per_sec": 0, 00:08:59.549 "w_mbytes_per_sec": 0 00:08:59.549 }, 00:08:59.549 "claimed": true, 00:08:59.549 "claim_type": "exclusive_write", 00:08:59.549 "zoned": false, 00:08:59.549 "supported_io_types": { 00:08:59.549 "read": true, 00:08:59.549 "write": true, 00:08:59.549 "unmap": true, 00:08:59.549 "flush": true, 00:08:59.549 "reset": true, 00:08:59.549 "nvme_admin": false, 00:08:59.549 "nvme_io": false, 00:08:59.549 "nvme_io_md": false, 00:08:59.549 "write_zeroes": true, 00:08:59.549 "zcopy": true, 00:08:59.549 "get_zone_info": false, 00:08:59.549 "zone_management": false, 00:08:59.549 "zone_append": false, 00:08:59.549 "compare": false, 00:08:59.549 "compare_and_write": false, 00:08:59.549 "abort": true, 00:08:59.549 "seek_hole": false, 00:08:59.549 "seek_data": false, 00:08:59.549 "copy": true, 00:08:59.549 "nvme_iov_md": false 00:08:59.549 }, 00:08:59.549 "memory_domains": [ 00:08:59.549 { 00:08:59.549 "dma_device_id": "system", 00:08:59.549 "dma_device_type": 1 00:08:59.549 }, 00:08:59.549 { 00:08:59.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.549 "dma_device_type": 2 00:08:59.549 } 00:08:59.549 ], 00:08:59.549 "driver_specific": {} 00:08:59.549 } 00:08:59.549 ] 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.549 "name": "Existed_Raid", 00:08:59.549 "uuid": "6a32f404-5815-44e4-9e2e-9a4315bc8085", 00:08:59.549 "strip_size_kb": 64, 00:08:59.549 "state": "configuring", 00:08:59.549 "raid_level": "concat", 00:08:59.549 "superblock": true, 00:08:59.549 "num_base_bdevs": 3, 00:08:59.549 "num_base_bdevs_discovered": 1, 00:08:59.549 "num_base_bdevs_operational": 3, 00:08:59.549 "base_bdevs_list": [ 00:08:59.549 { 00:08:59.549 "name": "BaseBdev1", 00:08:59.549 "uuid": "792c6955-1ca3-43f1-8648-22eec36f7463", 00:08:59.549 "is_configured": true, 00:08:59.549 "data_offset": 2048, 00:08:59.549 "data_size": 
63488 00:08:59.549 }, 00:08:59.549 { 00:08:59.549 "name": "BaseBdev2", 00:08:59.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.549 "is_configured": false, 00:08:59.549 "data_offset": 0, 00:08:59.549 "data_size": 0 00:08:59.549 }, 00:08:59.549 { 00:08:59.549 "name": "BaseBdev3", 00:08:59.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.549 "is_configured": false, 00:08:59.549 "data_offset": 0, 00:08:59.549 "data_size": 0 00:08:59.549 } 00:08:59.549 ] 00:08:59.549 }' 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.549 11:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.115 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:00.115 11:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.115 11:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.115 [2024-11-27 11:47:26.238682] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:00.115 [2024-11-27 11:47:26.238761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:00.115 11:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.115 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:00.115 11:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.115 11:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.115 [2024-11-27 11:47:26.250773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:00.115 [2024-11-27 
11:47:26.253092] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:00.115 [2024-11-27 11:47:26.253154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:00.115 [2024-11-27 11:47:26.253168] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:00.115 [2024-11-27 11:47:26.253179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:00.115 11:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.115 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:00.115 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:00.115 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:00.115 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.115 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.115 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.115 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.115 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.115 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.115 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.115 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.115 11:47:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.115 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.115 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.115 11:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.115 11:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.115 11:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.115 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.115 "name": "Existed_Raid", 00:09:00.115 "uuid": "c1f2b912-9fc5-4cd5-a35d-a8692315a3cd", 00:09:00.115 "strip_size_kb": 64, 00:09:00.115 "state": "configuring", 00:09:00.115 "raid_level": "concat", 00:09:00.115 "superblock": true, 00:09:00.115 "num_base_bdevs": 3, 00:09:00.115 "num_base_bdevs_discovered": 1, 00:09:00.115 "num_base_bdevs_operational": 3, 00:09:00.115 "base_bdevs_list": [ 00:09:00.115 { 00:09:00.115 "name": "BaseBdev1", 00:09:00.115 "uuid": "792c6955-1ca3-43f1-8648-22eec36f7463", 00:09:00.115 "is_configured": true, 00:09:00.115 "data_offset": 2048, 00:09:00.115 "data_size": 63488 00:09:00.115 }, 00:09:00.115 { 00:09:00.115 "name": "BaseBdev2", 00:09:00.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.115 "is_configured": false, 00:09:00.115 "data_offset": 0, 00:09:00.115 "data_size": 0 00:09:00.115 }, 00:09:00.115 { 00:09:00.115 "name": "BaseBdev3", 00:09:00.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.115 "is_configured": false, 00:09:00.115 "data_offset": 0, 00:09:00.115 "data_size": 0 00:09:00.115 } 00:09:00.115 ] 00:09:00.115 }' 00:09:00.115 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.115 11:47:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:00.373 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:00.373 11:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.373 11:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.373 [2024-11-27 11:47:26.749435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:00.373 BaseBdev2 00:09:00.373 11:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.373 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:00.373 11:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:00.373 11:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.373 11:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:00.373 11:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.374 11:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.374 11:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:00.374 11:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.374 11:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.632 11:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.632 11:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:00.632 11:47:26 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.632 11:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.632 [ 00:09:00.632 { 00:09:00.632 "name": "BaseBdev2", 00:09:00.632 "aliases": [ 00:09:00.632 "532af13e-db95-456b-80f4-26e140ae47cd" 00:09:00.632 ], 00:09:00.632 "product_name": "Malloc disk", 00:09:00.632 "block_size": 512, 00:09:00.632 "num_blocks": 65536, 00:09:00.632 "uuid": "532af13e-db95-456b-80f4-26e140ae47cd", 00:09:00.632 "assigned_rate_limits": { 00:09:00.632 "rw_ios_per_sec": 0, 00:09:00.632 "rw_mbytes_per_sec": 0, 00:09:00.632 "r_mbytes_per_sec": 0, 00:09:00.632 "w_mbytes_per_sec": 0 00:09:00.632 }, 00:09:00.632 "claimed": true, 00:09:00.632 "claim_type": "exclusive_write", 00:09:00.632 "zoned": false, 00:09:00.632 "supported_io_types": { 00:09:00.632 "read": true, 00:09:00.632 "write": true, 00:09:00.632 "unmap": true, 00:09:00.632 "flush": true, 00:09:00.632 "reset": true, 00:09:00.632 "nvme_admin": false, 00:09:00.632 "nvme_io": false, 00:09:00.632 "nvme_io_md": false, 00:09:00.632 "write_zeroes": true, 00:09:00.632 "zcopy": true, 00:09:00.632 "get_zone_info": false, 00:09:00.632 "zone_management": false, 00:09:00.632 "zone_append": false, 00:09:00.632 "compare": false, 00:09:00.632 "compare_and_write": false, 00:09:00.632 "abort": true, 00:09:00.632 "seek_hole": false, 00:09:00.632 "seek_data": false, 00:09:00.632 "copy": true, 00:09:00.632 "nvme_iov_md": false 00:09:00.632 }, 00:09:00.632 "memory_domains": [ 00:09:00.632 { 00:09:00.632 "dma_device_id": "system", 00:09:00.632 "dma_device_type": 1 00:09:00.632 }, 00:09:00.632 { 00:09:00.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.632 "dma_device_type": 2 00:09:00.632 } 00:09:00.632 ], 00:09:00.632 "driver_specific": {} 00:09:00.632 } 00:09:00.632 ] 00:09:00.632 11:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.632 11:47:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:09:00.632 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:00.632 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:00.632 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:00.632 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.632 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.632 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.632 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.632 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.632 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.632 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.632 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.632 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.632 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.632 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.632 11:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.632 11:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.632 11:47:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.632 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.632 "name": "Existed_Raid", 00:09:00.632 "uuid": "c1f2b912-9fc5-4cd5-a35d-a8692315a3cd", 00:09:00.632 "strip_size_kb": 64, 00:09:00.632 "state": "configuring", 00:09:00.632 "raid_level": "concat", 00:09:00.632 "superblock": true, 00:09:00.632 "num_base_bdevs": 3, 00:09:00.632 "num_base_bdevs_discovered": 2, 00:09:00.632 "num_base_bdevs_operational": 3, 00:09:00.632 "base_bdevs_list": [ 00:09:00.632 { 00:09:00.632 "name": "BaseBdev1", 00:09:00.632 "uuid": "792c6955-1ca3-43f1-8648-22eec36f7463", 00:09:00.632 "is_configured": true, 00:09:00.632 "data_offset": 2048, 00:09:00.632 "data_size": 63488 00:09:00.632 }, 00:09:00.632 { 00:09:00.632 "name": "BaseBdev2", 00:09:00.632 "uuid": "532af13e-db95-456b-80f4-26e140ae47cd", 00:09:00.632 "is_configured": true, 00:09:00.632 "data_offset": 2048, 00:09:00.632 "data_size": 63488 00:09:00.632 }, 00:09:00.632 { 00:09:00.632 "name": "BaseBdev3", 00:09:00.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.632 "is_configured": false, 00:09:00.632 "data_offset": 0, 00:09:00.632 "data_size": 0 00:09:00.632 } 00:09:00.632 ] 00:09:00.632 }' 00:09:00.632 11:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.632 11:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.891 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:00.891 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.891 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.149 [2024-11-27 11:47:27.316295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:01.149 [2024-11-27 11:47:27.316599] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:01.149 [2024-11-27 11:47:27.316628] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:01.149 [2024-11-27 11:47:27.316960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:01.149 BaseBdev3 00:09:01.149 [2024-11-27 11:47:27.317168] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:01.149 [2024-11-27 11:47:27.317187] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:01.149 [2024-11-27 11:47:27.317373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.149 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.149 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:01.149 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:01.149 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:01.149 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:01.149 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:01.149 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:01.149 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:01.149 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.149 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.149 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:09:01.149 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:01.149 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.149 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.149 [ 00:09:01.149 { 00:09:01.149 "name": "BaseBdev3", 00:09:01.149 "aliases": [ 00:09:01.149 "aacb4157-5ea4-42c4-89b8-bcc3abf962e0" 00:09:01.149 ], 00:09:01.149 "product_name": "Malloc disk", 00:09:01.149 "block_size": 512, 00:09:01.149 "num_blocks": 65536, 00:09:01.149 "uuid": "aacb4157-5ea4-42c4-89b8-bcc3abf962e0", 00:09:01.149 "assigned_rate_limits": { 00:09:01.149 "rw_ios_per_sec": 0, 00:09:01.149 "rw_mbytes_per_sec": 0, 00:09:01.149 "r_mbytes_per_sec": 0, 00:09:01.149 "w_mbytes_per_sec": 0 00:09:01.149 }, 00:09:01.149 "claimed": true, 00:09:01.149 "claim_type": "exclusive_write", 00:09:01.149 "zoned": false, 00:09:01.149 "supported_io_types": { 00:09:01.149 "read": true, 00:09:01.149 "write": true, 00:09:01.149 "unmap": true, 00:09:01.149 "flush": true, 00:09:01.149 "reset": true, 00:09:01.149 "nvme_admin": false, 00:09:01.149 "nvme_io": false, 00:09:01.149 "nvme_io_md": false, 00:09:01.149 "write_zeroes": true, 00:09:01.149 "zcopy": true, 00:09:01.149 "get_zone_info": false, 00:09:01.149 "zone_management": false, 00:09:01.149 "zone_append": false, 00:09:01.149 "compare": false, 00:09:01.149 "compare_and_write": false, 00:09:01.149 "abort": true, 00:09:01.149 "seek_hole": false, 00:09:01.149 "seek_data": false, 00:09:01.149 "copy": true, 00:09:01.149 "nvme_iov_md": false 00:09:01.149 }, 00:09:01.149 "memory_domains": [ 00:09:01.149 { 00:09:01.149 "dma_device_id": "system", 00:09:01.149 "dma_device_type": 1 00:09:01.149 }, 00:09:01.149 { 00:09:01.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.150 "dma_device_type": 2 00:09:01.150 } 00:09:01.150 ], 00:09:01.150 "driver_specific": 
{} 00:09:01.150 } 00:09:01.150 ] 00:09:01.150 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.150 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:01.150 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:01.150 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:01.150 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:01.150 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.150 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:01.150 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.150 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.150 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.150 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.150 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.150 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.150 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.150 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.150 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.150 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.150 
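The `verify_raid_bdev_state` calls above pipe `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and compare fields of the result. As a hedged, self-contained sketch (no SPDK target needed), the extraction can be reproduced against the JSON recorded in this log; the `sed` fallback below is an illustration, not the test's actual jq-based implementation:

```shell
# Sample of the raid_bdev_info JSON captured above, after BaseBdev3 was added.
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "online",
  "raid_level": "concat",
  "strip_size_kb": 64,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3
}'

# Extract the scalar fields the test compares (jq-free fallback for the sketch).
state=$(printf '%s\n' "$raid_bdev_info" | sed -n 's/.*"state": *"\([^"]*\)".*/\1/p')
discovered=$(printf '%s\n' "$raid_bdev_info" | sed -n 's/.*"num_base_bdevs_discovered": *\([0-9]*\).*/\1/p')

echo "$state $discovered"   # → online 3
```

Once all three base bdevs are discovered, the state comparison against `expected_state=online` passes, which is exactly the transition this part of the log shows.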
11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.150 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.150 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.150 "name": "Existed_Raid", 00:09:01.150 "uuid": "c1f2b912-9fc5-4cd5-a35d-a8692315a3cd", 00:09:01.150 "strip_size_kb": 64, 00:09:01.150 "state": "online", 00:09:01.150 "raid_level": "concat", 00:09:01.150 "superblock": true, 00:09:01.150 "num_base_bdevs": 3, 00:09:01.150 "num_base_bdevs_discovered": 3, 00:09:01.150 "num_base_bdevs_operational": 3, 00:09:01.150 "base_bdevs_list": [ 00:09:01.150 { 00:09:01.150 "name": "BaseBdev1", 00:09:01.150 "uuid": "792c6955-1ca3-43f1-8648-22eec36f7463", 00:09:01.150 "is_configured": true, 00:09:01.150 "data_offset": 2048, 00:09:01.150 "data_size": 63488 00:09:01.150 }, 00:09:01.150 { 00:09:01.150 "name": "BaseBdev2", 00:09:01.150 "uuid": "532af13e-db95-456b-80f4-26e140ae47cd", 00:09:01.150 "is_configured": true, 00:09:01.150 "data_offset": 2048, 00:09:01.150 "data_size": 63488 00:09:01.150 }, 00:09:01.150 { 00:09:01.150 "name": "BaseBdev3", 00:09:01.150 "uuid": "aacb4157-5ea4-42c4-89b8-bcc3abf962e0", 00:09:01.150 "is_configured": true, 00:09:01.150 "data_offset": 2048, 00:09:01.150 "data_size": 63488 00:09:01.150 } 00:09:01.150 ] 00:09:01.150 }' 00:09:01.150 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.150 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:01.715 [2024-11-27 11:47:27.808097] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:01.715 "name": "Existed_Raid", 00:09:01.715 "aliases": [ 00:09:01.715 "c1f2b912-9fc5-4cd5-a35d-a8692315a3cd" 00:09:01.715 ], 00:09:01.715 "product_name": "Raid Volume", 00:09:01.715 "block_size": 512, 00:09:01.715 "num_blocks": 190464, 00:09:01.715 "uuid": "c1f2b912-9fc5-4cd5-a35d-a8692315a3cd", 00:09:01.715 "assigned_rate_limits": { 00:09:01.715 "rw_ios_per_sec": 0, 00:09:01.715 "rw_mbytes_per_sec": 0, 00:09:01.715 "r_mbytes_per_sec": 0, 00:09:01.715 "w_mbytes_per_sec": 0 00:09:01.715 }, 00:09:01.715 "claimed": false, 00:09:01.715 "zoned": false, 00:09:01.715 "supported_io_types": { 00:09:01.715 "read": true, 00:09:01.715 "write": true, 00:09:01.715 "unmap": true, 00:09:01.715 "flush": true, 00:09:01.715 "reset": true, 00:09:01.715 "nvme_admin": false, 00:09:01.715 "nvme_io": false, 00:09:01.715 "nvme_io_md": false, 00:09:01.715 
"write_zeroes": true, 00:09:01.715 "zcopy": false, 00:09:01.715 "get_zone_info": false, 00:09:01.715 "zone_management": false, 00:09:01.715 "zone_append": false, 00:09:01.715 "compare": false, 00:09:01.715 "compare_and_write": false, 00:09:01.715 "abort": false, 00:09:01.715 "seek_hole": false, 00:09:01.715 "seek_data": false, 00:09:01.715 "copy": false, 00:09:01.715 "nvme_iov_md": false 00:09:01.715 }, 00:09:01.715 "memory_domains": [ 00:09:01.715 { 00:09:01.715 "dma_device_id": "system", 00:09:01.715 "dma_device_type": 1 00:09:01.715 }, 00:09:01.715 { 00:09:01.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.715 "dma_device_type": 2 00:09:01.715 }, 00:09:01.715 { 00:09:01.715 "dma_device_id": "system", 00:09:01.715 "dma_device_type": 1 00:09:01.715 }, 00:09:01.715 { 00:09:01.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.715 "dma_device_type": 2 00:09:01.715 }, 00:09:01.715 { 00:09:01.715 "dma_device_id": "system", 00:09:01.715 "dma_device_type": 1 00:09:01.715 }, 00:09:01.715 { 00:09:01.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.715 "dma_device_type": 2 00:09:01.715 } 00:09:01.715 ], 00:09:01.715 "driver_specific": { 00:09:01.715 "raid": { 00:09:01.715 "uuid": "c1f2b912-9fc5-4cd5-a35d-a8692315a3cd", 00:09:01.715 "strip_size_kb": 64, 00:09:01.715 "state": "online", 00:09:01.715 "raid_level": "concat", 00:09:01.715 "superblock": true, 00:09:01.715 "num_base_bdevs": 3, 00:09:01.715 "num_base_bdevs_discovered": 3, 00:09:01.715 "num_base_bdevs_operational": 3, 00:09:01.715 "base_bdevs_list": [ 00:09:01.715 { 00:09:01.715 "name": "BaseBdev1", 00:09:01.715 "uuid": "792c6955-1ca3-43f1-8648-22eec36f7463", 00:09:01.715 "is_configured": true, 00:09:01.715 "data_offset": 2048, 00:09:01.715 "data_size": 63488 00:09:01.715 }, 00:09:01.715 { 00:09:01.715 "name": "BaseBdev2", 00:09:01.715 "uuid": "532af13e-db95-456b-80f4-26e140ae47cd", 00:09:01.715 "is_configured": true, 00:09:01.715 "data_offset": 2048, 00:09:01.715 "data_size": 63488 00:09:01.715 }, 
00:09:01.715 { 00:09:01.715 "name": "BaseBdev3", 00:09:01.715 "uuid": "aacb4157-5ea4-42c4-89b8-bcc3abf962e0", 00:09:01.715 "is_configured": true, 00:09:01.715 "data_offset": 2048, 00:09:01.715 "data_size": 63488 00:09:01.715 } 00:09:01.715 ] 00:09:01.715 } 00:09:01.715 } 00:09:01.715 }' 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:01.715 BaseBdev2 00:09:01.715 BaseBdev3' 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.715 
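The `[[ 512 == \5\1\2\ \ \ ]]` comparisons above come from `verify_raid_bdev_properties`: it joins `[.block_size, .md_size, .md_interleave, .dif_type]` into one string for the raid bdev and for each base bdev, then string-compares them (null fields join as empty, leaving trailing spaces). A minimal sketch of that comparison, with values mirroring the Malloc bdevs in this log (`join_props` is a hypothetical stand-in for the jq `join(" ")` step):

```shell
# Join the four layout properties into one comparable string, as the test does.
join_props() {  # args: block_size md_size md_interleave dif_type
    printf '%s %s %s %s' "$1" "$2" "$3" "$4"
}

# Malloc bdevs here have block_size 512 and no metadata/DIF, so the last
# three fields are empty and the joined string carries trailing spaces.
cmp_raid_bdev=$(join_props 512 "" "" "")
cmp_base_bdev=$(join_props 512 "" "" "")

[ "$cmp_raid_bdev" = "$cmp_base_bdev" ] && echo match   # → match
```

The trailing spaces explain why the log's pattern match spells out `\5\1\2\ \ \ ` rather than a bare `512`.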
11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.715 11:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.715 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.715 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.716 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.716 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.716 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:01.716 11:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.716 11:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.716 11:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.716 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.716 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.716 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:01.716 11:47:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.716 11:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.716 [2024-11-27 11:47:28.067361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:01.716 [2024-11-27 11:47:28.067405] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:01.716 [2024-11-27 11:47:28.067471] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.973 11:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.973 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:01.973 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:01.973 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:01.973 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:01.973 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:01.973 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:01.973 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.973 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:01.973 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.973 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.974 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:01.974 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:01.974 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.974 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.974 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.974 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.974 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.974 11:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.974 11:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.974 11:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.974 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.974 "name": "Existed_Raid", 00:09:01.974 "uuid": "c1f2b912-9fc5-4cd5-a35d-a8692315a3cd", 00:09:01.974 "strip_size_kb": 64, 00:09:01.974 "state": "offline", 00:09:01.974 "raid_level": "concat", 00:09:01.974 "superblock": true, 00:09:01.974 "num_base_bdevs": 3, 00:09:01.974 "num_base_bdevs_discovered": 2, 00:09:01.974 "num_base_bdevs_operational": 2, 00:09:01.974 "base_bdevs_list": [ 00:09:01.974 { 00:09:01.974 "name": null, 00:09:01.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.974 "is_configured": false, 00:09:01.974 "data_offset": 0, 00:09:01.974 "data_size": 63488 00:09:01.974 }, 00:09:01.974 { 00:09:01.974 "name": "BaseBdev2", 00:09:01.974 "uuid": "532af13e-db95-456b-80f4-26e140ae47cd", 00:09:01.974 "is_configured": true, 00:09:01.974 "data_offset": 2048, 00:09:01.974 "data_size": 63488 00:09:01.974 }, 00:09:01.974 { 00:09:01.974 "name": "BaseBdev3", 00:09:01.974 "uuid": "aacb4157-5ea4-42c4-89b8-bcc3abf962e0", 
00:09:01.974 "is_configured": true, 00:09:01.974 "data_offset": 2048, 00:09:01.974 "data_size": 63488 00:09:01.974 } 00:09:01.974 ] 00:09:01.974 }' 00:09:01.974 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.974 11:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.232 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:02.232 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:02.232 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:02.232 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.232 11:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.232 11:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.491 11:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.491 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:02.491 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:02.491 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:02.491 11:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.491 11:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.491 [2024-11-27 11:47:28.645159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:02.491 11:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.491 11:47:28 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:02.491 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:02.491 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.491 11:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.491 11:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.491 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:02.491 11:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.491 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:02.491 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:02.491 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:02.491 11:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.491 11:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.491 [2024-11-27 11:47:28.817996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:02.491 [2024-11-27 11:47:28.818068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:02.750 11:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.750 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:02.750 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:02.750 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:02.750 11:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.750 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:02.750 11:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.750 11:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.750 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:02.750 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:02.750 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:02.750 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:02.750 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:02.750 11:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:02.750 11:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.750 11:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.750 BaseBdev2 00:09:02.750 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.750 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:02.750 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:02.750 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.750 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:02.750 11:47:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.750 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.750 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:02.750 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.750 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.750 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.750 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:02.750 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.750 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.750 [ 00:09:02.750 { 00:09:02.750 "name": "BaseBdev2", 00:09:02.750 "aliases": [ 00:09:02.750 "8f0c0a2c-a192-49e1-ba95-1fddf090fb7b" 00:09:02.750 ], 00:09:02.750 "product_name": "Malloc disk", 00:09:02.750 "block_size": 512, 00:09:02.750 "num_blocks": 65536, 00:09:02.750 "uuid": "8f0c0a2c-a192-49e1-ba95-1fddf090fb7b", 00:09:02.750 "assigned_rate_limits": { 00:09:02.750 "rw_ios_per_sec": 0, 00:09:02.750 "rw_mbytes_per_sec": 0, 00:09:02.750 "r_mbytes_per_sec": 0, 00:09:02.750 "w_mbytes_per_sec": 0 00:09:02.750 }, 00:09:02.750 "claimed": false, 00:09:02.750 "zoned": false, 00:09:02.750 "supported_io_types": { 00:09:02.750 "read": true, 00:09:02.750 "write": true, 00:09:02.750 "unmap": true, 00:09:02.750 "flush": true, 00:09:02.750 "reset": true, 00:09:02.750 "nvme_admin": false, 00:09:02.750 "nvme_io": false, 00:09:02.750 "nvme_io_md": false, 00:09:02.750 "write_zeroes": true, 00:09:02.750 "zcopy": true, 00:09:02.750 "get_zone_info": false, 00:09:02.750 
"zone_management": false, 00:09:02.750 "zone_append": false, 00:09:02.750 "compare": false, 00:09:02.750 "compare_and_write": false, 00:09:02.750 "abort": true, 00:09:02.750 "seek_hole": false, 00:09:02.750 "seek_data": false, 00:09:02.750 "copy": true, 00:09:02.750 "nvme_iov_md": false 00:09:02.750 }, 00:09:02.750 "memory_domains": [ 00:09:02.750 { 00:09:02.750 "dma_device_id": "system", 00:09:02.750 "dma_device_type": 1 00:09:02.750 }, 00:09:02.750 { 00:09:02.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.750 "dma_device_type": 2 00:09:02.750 } 00:09:02.750 ], 00:09:02.750 "driver_specific": {} 00:09:02.750 } 00:09:02.750 ] 00:09:02.750 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.750 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:02.750 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:02.750 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:02.750 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:02.750 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.750 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.750 BaseBdev3 00:09:02.750 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.750 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:02.750 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:02.750 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.750 11:47:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:09:02.750 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.751 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.751 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:02.751 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.751 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.751 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.751 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:02.751 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.751 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.017 [ 00:09:03.017 { 00:09:03.017 "name": "BaseBdev3", 00:09:03.017 "aliases": [ 00:09:03.017 "c5439acf-7164-48a1-9fe4-179b9fddf0d4" 00:09:03.017 ], 00:09:03.017 "product_name": "Malloc disk", 00:09:03.017 "block_size": 512, 00:09:03.017 "num_blocks": 65536, 00:09:03.017 "uuid": "c5439acf-7164-48a1-9fe4-179b9fddf0d4", 00:09:03.017 "assigned_rate_limits": { 00:09:03.017 "rw_ios_per_sec": 0, 00:09:03.017 "rw_mbytes_per_sec": 0, 00:09:03.017 "r_mbytes_per_sec": 0, 00:09:03.017 "w_mbytes_per_sec": 0 00:09:03.017 }, 00:09:03.017 "claimed": false, 00:09:03.017 "zoned": false, 00:09:03.017 "supported_io_types": { 00:09:03.017 "read": true, 00:09:03.017 "write": true, 00:09:03.017 "unmap": true, 00:09:03.017 "flush": true, 00:09:03.017 "reset": true, 00:09:03.017 "nvme_admin": false, 00:09:03.017 "nvme_io": false, 00:09:03.017 "nvme_io_md": false, 00:09:03.017 "write_zeroes": true, 00:09:03.017 
"zcopy": true, 00:09:03.017 "get_zone_info": false, 00:09:03.017 "zone_management": false, 00:09:03.017 "zone_append": false, 00:09:03.017 "compare": false, 00:09:03.017 "compare_and_write": false, 00:09:03.017 "abort": true, 00:09:03.017 "seek_hole": false, 00:09:03.017 "seek_data": false, 00:09:03.017 "copy": true, 00:09:03.017 "nvme_iov_md": false 00:09:03.017 }, 00:09:03.017 "memory_domains": [ 00:09:03.017 { 00:09:03.017 "dma_device_id": "system", 00:09:03.017 "dma_device_type": 1 00:09:03.017 }, 00:09:03.017 { 00:09:03.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.017 "dma_device_type": 2 00:09:03.017 } 00:09:03.017 ], 00:09:03.017 "driver_specific": {} 00:09:03.017 } 00:09:03.017 ] 00:09:03.017 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.017 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:03.017 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:03.017 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:03.017 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:03.017 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.017 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.017 [2024-11-27 11:47:29.158937] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:03.017 [2024-11-27 11:47:29.158994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:03.017 [2024-11-27 11:47:29.159026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:03.017 [2024-11-27 11:47:29.161178] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:03.017 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.017 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:03.017 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.017 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.017 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.017 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.017 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.017 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.017 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.017 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.017 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.017 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.018 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.018 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.018 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.018 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.018 11:47:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.018 "name": "Existed_Raid", 00:09:03.018 "uuid": "4f20a841-3496-4ecd-a205-fe76b71ca96d", 00:09:03.018 "strip_size_kb": 64, 00:09:03.018 "state": "configuring", 00:09:03.018 "raid_level": "concat", 00:09:03.018 "superblock": true, 00:09:03.018 "num_base_bdevs": 3, 00:09:03.018 "num_base_bdevs_discovered": 2, 00:09:03.018 "num_base_bdevs_operational": 3, 00:09:03.018 "base_bdevs_list": [ 00:09:03.018 { 00:09:03.018 "name": "BaseBdev1", 00:09:03.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.018 "is_configured": false, 00:09:03.018 "data_offset": 0, 00:09:03.018 "data_size": 0 00:09:03.018 }, 00:09:03.018 { 00:09:03.018 "name": "BaseBdev2", 00:09:03.018 "uuid": "8f0c0a2c-a192-49e1-ba95-1fddf090fb7b", 00:09:03.018 "is_configured": true, 00:09:03.018 "data_offset": 2048, 00:09:03.018 "data_size": 63488 00:09:03.018 }, 00:09:03.018 { 00:09:03.018 "name": "BaseBdev3", 00:09:03.018 "uuid": "c5439acf-7164-48a1-9fe4-179b9fddf0d4", 00:09:03.018 "is_configured": true, 00:09:03.018 "data_offset": 2048, 00:09:03.018 "data_size": 63488 00:09:03.018 } 00:09:03.018 ] 00:09:03.018 }' 00:09:03.018 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.018 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.276 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:03.276 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.276 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.276 [2024-11-27 11:47:29.582189] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:03.276 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.276 11:47:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:03.276 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.276 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.276 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.276 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.276 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.276 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.276 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.276 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.276 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.276 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.276 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.276 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.276 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.276 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.276 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.276 "name": "Existed_Raid", 00:09:03.276 "uuid": "4f20a841-3496-4ecd-a205-fe76b71ca96d", 00:09:03.276 "strip_size_kb": 64, 
00:09:03.276 "state": "configuring", 00:09:03.276 "raid_level": "concat", 00:09:03.276 "superblock": true, 00:09:03.276 "num_base_bdevs": 3, 00:09:03.276 "num_base_bdevs_discovered": 1, 00:09:03.276 "num_base_bdevs_operational": 3, 00:09:03.276 "base_bdevs_list": [ 00:09:03.276 { 00:09:03.276 "name": "BaseBdev1", 00:09:03.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.277 "is_configured": false, 00:09:03.277 "data_offset": 0, 00:09:03.277 "data_size": 0 00:09:03.277 }, 00:09:03.277 { 00:09:03.277 "name": null, 00:09:03.277 "uuid": "8f0c0a2c-a192-49e1-ba95-1fddf090fb7b", 00:09:03.277 "is_configured": false, 00:09:03.277 "data_offset": 0, 00:09:03.277 "data_size": 63488 00:09:03.277 }, 00:09:03.277 { 00:09:03.277 "name": "BaseBdev3", 00:09:03.277 "uuid": "c5439acf-7164-48a1-9fe4-179b9fddf0d4", 00:09:03.277 "is_configured": true, 00:09:03.277 "data_offset": 2048, 00:09:03.277 "data_size": 63488 00:09:03.277 } 00:09:03.277 ] 00:09:03.277 }' 00:09:03.277 11:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.277 11:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.845 [2024-11-27 11:47:30.187368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:03.845 BaseBdev1 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.845 
[ 00:09:03.845 { 00:09:03.845 "name": "BaseBdev1", 00:09:03.845 "aliases": [ 00:09:03.845 "f96cd1f9-049d-4bc1-877e-d66eb34ee7cd" 00:09:03.845 ], 00:09:03.845 "product_name": "Malloc disk", 00:09:03.845 "block_size": 512, 00:09:03.845 "num_blocks": 65536, 00:09:03.845 "uuid": "f96cd1f9-049d-4bc1-877e-d66eb34ee7cd", 00:09:03.845 "assigned_rate_limits": { 00:09:03.845 "rw_ios_per_sec": 0, 00:09:03.845 "rw_mbytes_per_sec": 0, 00:09:03.845 "r_mbytes_per_sec": 0, 00:09:03.845 "w_mbytes_per_sec": 0 00:09:03.845 }, 00:09:03.845 "claimed": true, 00:09:03.845 "claim_type": "exclusive_write", 00:09:03.845 "zoned": false, 00:09:03.845 "supported_io_types": { 00:09:03.845 "read": true, 00:09:03.845 "write": true, 00:09:03.845 "unmap": true, 00:09:03.845 "flush": true, 00:09:03.845 "reset": true, 00:09:03.845 "nvme_admin": false, 00:09:03.845 "nvme_io": false, 00:09:03.845 "nvme_io_md": false, 00:09:03.845 "write_zeroes": true, 00:09:03.845 "zcopy": true, 00:09:03.845 "get_zone_info": false, 00:09:03.845 "zone_management": false, 00:09:03.845 "zone_append": false, 00:09:03.845 "compare": false, 00:09:03.845 "compare_and_write": false, 00:09:03.845 "abort": true, 00:09:03.845 "seek_hole": false, 00:09:03.845 "seek_data": false, 00:09:03.845 "copy": true, 00:09:03.845 "nvme_iov_md": false 00:09:03.845 }, 00:09:03.845 "memory_domains": [ 00:09:03.845 { 00:09:03.845 "dma_device_id": "system", 00:09:03.845 "dma_device_type": 1 00:09:03.845 }, 00:09:03.845 { 00:09:03.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.845 "dma_device_type": 2 00:09:03.845 } 00:09:03.845 ], 00:09:03.845 "driver_specific": {} 00:09:03.845 } 00:09:03.845 ] 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.845 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.104 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.104 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.104 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.104 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.104 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.104 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.104 "name": "Existed_Raid", 00:09:04.104 "uuid": "4f20a841-3496-4ecd-a205-fe76b71ca96d", 00:09:04.104 "strip_size_kb": 64, 00:09:04.104 "state": "configuring", 00:09:04.104 "raid_level": "concat", 00:09:04.104 "superblock": true, 
00:09:04.104 "num_base_bdevs": 3, 00:09:04.104 "num_base_bdevs_discovered": 2, 00:09:04.104 "num_base_bdevs_operational": 3, 00:09:04.104 "base_bdevs_list": [ 00:09:04.104 { 00:09:04.104 "name": "BaseBdev1", 00:09:04.104 "uuid": "f96cd1f9-049d-4bc1-877e-d66eb34ee7cd", 00:09:04.104 "is_configured": true, 00:09:04.104 "data_offset": 2048, 00:09:04.104 "data_size": 63488 00:09:04.104 }, 00:09:04.104 { 00:09:04.104 "name": null, 00:09:04.104 "uuid": "8f0c0a2c-a192-49e1-ba95-1fddf090fb7b", 00:09:04.104 "is_configured": false, 00:09:04.104 "data_offset": 0, 00:09:04.104 "data_size": 63488 00:09:04.104 }, 00:09:04.104 { 00:09:04.104 "name": "BaseBdev3", 00:09:04.104 "uuid": "c5439acf-7164-48a1-9fe4-179b9fddf0d4", 00:09:04.104 "is_configured": true, 00:09:04.104 "data_offset": 2048, 00:09:04.104 "data_size": 63488 00:09:04.104 } 00:09:04.104 ] 00:09:04.104 }' 00:09:04.104 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.104 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.362 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.362 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.362 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:04.362 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.620 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.621 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:04.621 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:04.621 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:09:04.621 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.621 [2024-11-27 11:47:30.786538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:04.621 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.621 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:04.621 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.621 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.621 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.621 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.621 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.621 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.621 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.621 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.621 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.621 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.621 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.621 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.621 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:09:04.621 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.621 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.621 "name": "Existed_Raid", 00:09:04.621 "uuid": "4f20a841-3496-4ecd-a205-fe76b71ca96d", 00:09:04.621 "strip_size_kb": 64, 00:09:04.621 "state": "configuring", 00:09:04.621 "raid_level": "concat", 00:09:04.621 "superblock": true, 00:09:04.621 "num_base_bdevs": 3, 00:09:04.621 "num_base_bdevs_discovered": 1, 00:09:04.621 "num_base_bdevs_operational": 3, 00:09:04.621 "base_bdevs_list": [ 00:09:04.621 { 00:09:04.621 "name": "BaseBdev1", 00:09:04.621 "uuid": "f96cd1f9-049d-4bc1-877e-d66eb34ee7cd", 00:09:04.621 "is_configured": true, 00:09:04.621 "data_offset": 2048, 00:09:04.621 "data_size": 63488 00:09:04.621 }, 00:09:04.621 { 00:09:04.621 "name": null, 00:09:04.621 "uuid": "8f0c0a2c-a192-49e1-ba95-1fddf090fb7b", 00:09:04.621 "is_configured": false, 00:09:04.621 "data_offset": 0, 00:09:04.621 "data_size": 63488 00:09:04.621 }, 00:09:04.621 { 00:09:04.621 "name": null, 00:09:04.621 "uuid": "c5439acf-7164-48a1-9fe4-179b9fddf0d4", 00:09:04.621 "is_configured": false, 00:09:04.621 "data_offset": 0, 00:09:04.621 "data_size": 63488 00:09:04.621 } 00:09:04.621 ] 00:09:04.621 }' 00:09:04.621 11:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.621 11:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.879 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.879 11:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.879 11:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.879 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:09:04.879 11:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.136 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:05.136 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:05.137 11:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.137 11:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.137 [2024-11-27 11:47:31.297695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:05.137 11:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.137 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:05.137 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.137 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.137 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.137 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.137 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.137 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.137 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.137 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.137 11:47:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.137 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.137 11:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.137 11:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.137 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.137 11:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.137 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.137 "name": "Existed_Raid", 00:09:05.137 "uuid": "4f20a841-3496-4ecd-a205-fe76b71ca96d", 00:09:05.137 "strip_size_kb": 64, 00:09:05.137 "state": "configuring", 00:09:05.137 "raid_level": "concat", 00:09:05.137 "superblock": true, 00:09:05.137 "num_base_bdevs": 3, 00:09:05.137 "num_base_bdevs_discovered": 2, 00:09:05.137 "num_base_bdevs_operational": 3, 00:09:05.137 "base_bdevs_list": [ 00:09:05.137 { 00:09:05.137 "name": "BaseBdev1", 00:09:05.137 "uuid": "f96cd1f9-049d-4bc1-877e-d66eb34ee7cd", 00:09:05.137 "is_configured": true, 00:09:05.137 "data_offset": 2048, 00:09:05.137 "data_size": 63488 00:09:05.137 }, 00:09:05.137 { 00:09:05.137 "name": null, 00:09:05.137 "uuid": "8f0c0a2c-a192-49e1-ba95-1fddf090fb7b", 00:09:05.137 "is_configured": false, 00:09:05.137 "data_offset": 0, 00:09:05.137 "data_size": 63488 00:09:05.137 }, 00:09:05.137 { 00:09:05.137 "name": "BaseBdev3", 00:09:05.137 "uuid": "c5439acf-7164-48a1-9fe4-179b9fddf0d4", 00:09:05.137 "is_configured": true, 00:09:05.137 "data_offset": 2048, 00:09:05.137 "data_size": 63488 00:09:05.137 } 00:09:05.137 ] 00:09:05.137 }' 00:09:05.137 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.137 
11:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.432 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.432 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:05.432 11:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.432 11:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.432 11:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.726 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:05.726 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:05.726 11:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.726 11:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.726 [2024-11-27 11:47:31.820889] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:05.726 11:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.726 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:05.726 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.726 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.726 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.726 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.726 11:47:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.726 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.726 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.726 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.727 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.727 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.727 11:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.727 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.727 11:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.727 11:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.727 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.727 "name": "Existed_Raid", 00:09:05.727 "uuid": "4f20a841-3496-4ecd-a205-fe76b71ca96d", 00:09:05.727 "strip_size_kb": 64, 00:09:05.727 "state": "configuring", 00:09:05.727 "raid_level": "concat", 00:09:05.727 "superblock": true, 00:09:05.727 "num_base_bdevs": 3, 00:09:05.727 "num_base_bdevs_discovered": 1, 00:09:05.727 "num_base_bdevs_operational": 3, 00:09:05.727 "base_bdevs_list": [ 00:09:05.727 { 00:09:05.727 "name": null, 00:09:05.727 "uuid": "f96cd1f9-049d-4bc1-877e-d66eb34ee7cd", 00:09:05.727 "is_configured": false, 00:09:05.727 "data_offset": 0, 00:09:05.727 "data_size": 63488 00:09:05.727 }, 00:09:05.727 { 00:09:05.727 "name": null, 00:09:05.727 "uuid": "8f0c0a2c-a192-49e1-ba95-1fddf090fb7b", 00:09:05.727 "is_configured": false, 
00:09:05.727 "data_offset": 0, 00:09:05.727 "data_size": 63488 00:09:05.727 }, 00:09:05.727 { 00:09:05.727 "name": "BaseBdev3", 00:09:05.727 "uuid": "c5439acf-7164-48a1-9fe4-179b9fddf0d4", 00:09:05.727 "is_configured": true, 00:09:05.727 "data_offset": 2048, 00:09:05.727 "data_size": 63488 00:09:05.727 } 00:09:05.727 ] 00:09:05.727 }' 00:09:05.727 11:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.727 11:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.297 11:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.297 11:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.297 11:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:06.297 11:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.297 11:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.297 11:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:06.297 11:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:06.297 11:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.297 11:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.297 [2024-11-27 11:47:32.474506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:06.297 11:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.297 11:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:06.297 11:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.297 11:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.297 11:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.297 11:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.297 11:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.297 11:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.297 11:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.297 11:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.297 11:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.297 11:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.297 11:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.297 11:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.297 11:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.297 11:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.297 11:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.297 "name": "Existed_Raid", 00:09:06.297 "uuid": "4f20a841-3496-4ecd-a205-fe76b71ca96d", 00:09:06.297 "strip_size_kb": 64, 00:09:06.297 "state": "configuring", 00:09:06.297 "raid_level": "concat", 00:09:06.297 "superblock": true, 00:09:06.297 
"num_base_bdevs": 3, 00:09:06.297 "num_base_bdevs_discovered": 2, 00:09:06.297 "num_base_bdevs_operational": 3, 00:09:06.297 "base_bdevs_list": [ 00:09:06.297 { 00:09:06.297 "name": null, 00:09:06.297 "uuid": "f96cd1f9-049d-4bc1-877e-d66eb34ee7cd", 00:09:06.297 "is_configured": false, 00:09:06.297 "data_offset": 0, 00:09:06.297 "data_size": 63488 00:09:06.297 }, 00:09:06.297 { 00:09:06.297 "name": "BaseBdev2", 00:09:06.297 "uuid": "8f0c0a2c-a192-49e1-ba95-1fddf090fb7b", 00:09:06.297 "is_configured": true, 00:09:06.297 "data_offset": 2048, 00:09:06.297 "data_size": 63488 00:09:06.297 }, 00:09:06.297 { 00:09:06.297 "name": "BaseBdev3", 00:09:06.297 "uuid": "c5439acf-7164-48a1-9fe4-179b9fddf0d4", 00:09:06.297 "is_configured": true, 00:09:06.297 "data_offset": 2048, 00:09:06.297 "data_size": 63488 00:09:06.297 } 00:09:06.297 ] 00:09:06.297 }' 00:09:06.297 11:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.297 11:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.557 11:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.557 11:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.557 11:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.557 11:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:06.557 11:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.557 11:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:06.557 11:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.557 11:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r 
'.[0].base_bdevs_list[0].uuid' 00:09:06.557 11:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.557 11:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.816 11:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.816 11:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f96cd1f9-049d-4bc1-877e-d66eb34ee7cd 00:09:06.816 11:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.816 11:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.816 [2024-11-27 11:47:33.024326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:06.816 [2024-11-27 11:47:33.024577] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:06.816 [2024-11-27 11:47:33.024594] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:06.816 [2024-11-27 11:47:33.024917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:06.816 [2024-11-27 11:47:33.025094] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:06.816 [2024-11-27 11:47:33.025114] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:06.816 [2024-11-27 11:47:33.025279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.816 NewBaseBdev 00:09:06.816 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.816 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:06.816 11:47:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:06.816 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:06.816 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:06.816 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:06.816 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:06.816 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:06.816 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.816 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.816 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.816 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:06.816 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.816 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.816 [ 00:09:06.816 { 00:09:06.816 "name": "NewBaseBdev", 00:09:06.816 "aliases": [ 00:09:06.816 "f96cd1f9-049d-4bc1-877e-d66eb34ee7cd" 00:09:06.816 ], 00:09:06.816 "product_name": "Malloc disk", 00:09:06.816 "block_size": 512, 00:09:06.816 "num_blocks": 65536, 00:09:06.816 "uuid": "f96cd1f9-049d-4bc1-877e-d66eb34ee7cd", 00:09:06.816 "assigned_rate_limits": { 00:09:06.816 "rw_ios_per_sec": 0, 00:09:06.816 "rw_mbytes_per_sec": 0, 00:09:06.817 "r_mbytes_per_sec": 0, 00:09:06.817 "w_mbytes_per_sec": 0 00:09:06.817 }, 00:09:06.817 "claimed": true, 00:09:06.817 "claim_type": "exclusive_write", 00:09:06.817 "zoned": false, 00:09:06.817 "supported_io_types": { 
00:09:06.817 "read": true, 00:09:06.817 "write": true, 00:09:06.817 "unmap": true, 00:09:06.817 "flush": true, 00:09:06.817 "reset": true, 00:09:06.817 "nvme_admin": false, 00:09:06.817 "nvme_io": false, 00:09:06.817 "nvme_io_md": false, 00:09:06.817 "write_zeroes": true, 00:09:06.817 "zcopy": true, 00:09:06.817 "get_zone_info": false, 00:09:06.817 "zone_management": false, 00:09:06.817 "zone_append": false, 00:09:06.817 "compare": false, 00:09:06.817 "compare_and_write": false, 00:09:06.817 "abort": true, 00:09:06.817 "seek_hole": false, 00:09:06.817 "seek_data": false, 00:09:06.817 "copy": true, 00:09:06.817 "nvme_iov_md": false 00:09:06.817 }, 00:09:06.817 "memory_domains": [ 00:09:06.817 { 00:09:06.817 "dma_device_id": "system", 00:09:06.817 "dma_device_type": 1 00:09:06.817 }, 00:09:06.817 { 00:09:06.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.817 "dma_device_type": 2 00:09:06.817 } 00:09:06.817 ], 00:09:06.817 "driver_specific": {} 00:09:06.817 } 00:09:06.817 ] 00:09:06.817 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.817 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:06.817 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:06.817 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.817 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.817 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.817 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.817 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.817 11:47:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.817 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.817 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.817 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.817 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.817 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.817 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.817 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.817 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.817 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.817 "name": "Existed_Raid", 00:09:06.817 "uuid": "4f20a841-3496-4ecd-a205-fe76b71ca96d", 00:09:06.817 "strip_size_kb": 64, 00:09:06.817 "state": "online", 00:09:06.817 "raid_level": "concat", 00:09:06.817 "superblock": true, 00:09:06.817 "num_base_bdevs": 3, 00:09:06.817 "num_base_bdevs_discovered": 3, 00:09:06.817 "num_base_bdevs_operational": 3, 00:09:06.817 "base_bdevs_list": [ 00:09:06.817 { 00:09:06.817 "name": "NewBaseBdev", 00:09:06.817 "uuid": "f96cd1f9-049d-4bc1-877e-d66eb34ee7cd", 00:09:06.817 "is_configured": true, 00:09:06.817 "data_offset": 2048, 00:09:06.817 "data_size": 63488 00:09:06.817 }, 00:09:06.817 { 00:09:06.817 "name": "BaseBdev2", 00:09:06.817 "uuid": "8f0c0a2c-a192-49e1-ba95-1fddf090fb7b", 00:09:06.817 "is_configured": true, 00:09:06.817 "data_offset": 2048, 00:09:06.817 "data_size": 63488 00:09:06.817 }, 00:09:06.817 { 00:09:06.817 
"name": "BaseBdev3", 00:09:06.817 "uuid": "c5439acf-7164-48a1-9fe4-179b9fddf0d4", 00:09:06.817 "is_configured": true, 00:09:06.817 "data_offset": 2048, 00:09:06.817 "data_size": 63488 00:09:06.817 } 00:09:06.817 ] 00:09:06.817 }' 00:09:06.817 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.817 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.384 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:07.384 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:07.384 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:07.384 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:07.384 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:07.384 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:07.384 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:07.384 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.384 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.384 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:07.384 [2024-11-27 11:47:33.531969] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:07.384 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.384 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:07.384 "name": "Existed_Raid", 00:09:07.384 "aliases": [ 
00:09:07.384 "4f20a841-3496-4ecd-a205-fe76b71ca96d" 00:09:07.384 ], 00:09:07.384 "product_name": "Raid Volume", 00:09:07.384 "block_size": 512, 00:09:07.384 "num_blocks": 190464, 00:09:07.384 "uuid": "4f20a841-3496-4ecd-a205-fe76b71ca96d", 00:09:07.384 "assigned_rate_limits": { 00:09:07.384 "rw_ios_per_sec": 0, 00:09:07.384 "rw_mbytes_per_sec": 0, 00:09:07.384 "r_mbytes_per_sec": 0, 00:09:07.384 "w_mbytes_per_sec": 0 00:09:07.384 }, 00:09:07.384 "claimed": false, 00:09:07.384 "zoned": false, 00:09:07.384 "supported_io_types": { 00:09:07.384 "read": true, 00:09:07.384 "write": true, 00:09:07.384 "unmap": true, 00:09:07.384 "flush": true, 00:09:07.384 "reset": true, 00:09:07.384 "nvme_admin": false, 00:09:07.384 "nvme_io": false, 00:09:07.384 "nvme_io_md": false, 00:09:07.384 "write_zeroes": true, 00:09:07.384 "zcopy": false, 00:09:07.384 "get_zone_info": false, 00:09:07.384 "zone_management": false, 00:09:07.384 "zone_append": false, 00:09:07.384 "compare": false, 00:09:07.384 "compare_and_write": false, 00:09:07.384 "abort": false, 00:09:07.384 "seek_hole": false, 00:09:07.384 "seek_data": false, 00:09:07.384 "copy": false, 00:09:07.384 "nvme_iov_md": false 00:09:07.384 }, 00:09:07.384 "memory_domains": [ 00:09:07.384 { 00:09:07.384 "dma_device_id": "system", 00:09:07.384 "dma_device_type": 1 00:09:07.384 }, 00:09:07.384 { 00:09:07.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.384 "dma_device_type": 2 00:09:07.384 }, 00:09:07.384 { 00:09:07.384 "dma_device_id": "system", 00:09:07.384 "dma_device_type": 1 00:09:07.384 }, 00:09:07.384 { 00:09:07.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.384 "dma_device_type": 2 00:09:07.384 }, 00:09:07.384 { 00:09:07.384 "dma_device_id": "system", 00:09:07.384 "dma_device_type": 1 00:09:07.384 }, 00:09:07.384 { 00:09:07.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.384 "dma_device_type": 2 00:09:07.384 } 00:09:07.384 ], 00:09:07.384 "driver_specific": { 00:09:07.384 "raid": { 00:09:07.384 "uuid": 
"4f20a841-3496-4ecd-a205-fe76b71ca96d", 00:09:07.384 "strip_size_kb": 64, 00:09:07.384 "state": "online", 00:09:07.384 "raid_level": "concat", 00:09:07.384 "superblock": true, 00:09:07.384 "num_base_bdevs": 3, 00:09:07.384 "num_base_bdevs_discovered": 3, 00:09:07.384 "num_base_bdevs_operational": 3, 00:09:07.384 "base_bdevs_list": [ 00:09:07.384 { 00:09:07.384 "name": "NewBaseBdev", 00:09:07.384 "uuid": "f96cd1f9-049d-4bc1-877e-d66eb34ee7cd", 00:09:07.384 "is_configured": true, 00:09:07.384 "data_offset": 2048, 00:09:07.384 "data_size": 63488 00:09:07.384 }, 00:09:07.384 { 00:09:07.384 "name": "BaseBdev2", 00:09:07.384 "uuid": "8f0c0a2c-a192-49e1-ba95-1fddf090fb7b", 00:09:07.384 "is_configured": true, 00:09:07.384 "data_offset": 2048, 00:09:07.384 "data_size": 63488 00:09:07.384 }, 00:09:07.384 { 00:09:07.384 "name": "BaseBdev3", 00:09:07.384 "uuid": "c5439acf-7164-48a1-9fe4-179b9fddf0d4", 00:09:07.384 "is_configured": true, 00:09:07.384 "data_offset": 2048, 00:09:07.384 "data_size": 63488 00:09:07.384 } 00:09:07.384 ] 00:09:07.384 } 00:09:07.384 } 00:09:07.384 }' 00:09:07.384 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:07.384 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:07.384 BaseBdev2 00:09:07.384 BaseBdev3' 00:09:07.384 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.384 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:07.384 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.384 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:07.384 11:47:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.384 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.384 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.385 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.385 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.385 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.385 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.385 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:07.385 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.385 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.385 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.385 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.385 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.385 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.385 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.385 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.385 11:47:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:07.385 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.385 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.385 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.644 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.644 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.644 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:07.644 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.644 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.644 [2024-11-27 11:47:33.779176] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:07.644 [2024-11-27 11:47:33.779212] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.644 [2024-11-27 11:47:33.779307] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.644 [2024-11-27 11:47:33.779365] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:07.644 [2024-11-27 11:47:33.779378] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:07.644 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.644 11:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66204 00:09:07.644 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 66204 ']' 00:09:07.644 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66204 00:09:07.644 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:07.644 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.644 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66204 00:09:07.644 killing process with pid 66204 00:09:07.644 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.644 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.644 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66204' 00:09:07.644 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66204 00:09:07.644 [2024-11-27 11:47:33.822926] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:07.644 11:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66204 00:09:07.903 [2024-11-27 11:47:34.163263] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:09.280 ************************************ 00:09:09.280 END TEST raid_state_function_test_sb 00:09:09.280 ************************************ 00:09:09.280 11:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:09.280 00:09:09.280 real 0m11.282s 00:09:09.280 user 0m17.909s 00:09:09.280 sys 0m1.884s 00:09:09.280 11:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.280 11:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.280 11:47:35 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test concat 3 00:09:09.280 11:47:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:09.280 11:47:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.280 11:47:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:09.280 ************************************ 00:09:09.280 START TEST raid_superblock_test 00:09:09.280 ************************************ 00:09:09.280 11:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:09.280 11:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:09.280 11:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:09.280 11:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:09.280 11:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:09.280 11:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:09.280 11:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:09.280 11:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:09.280 11:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:09.280 11:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:09.280 11:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:09.280 11:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:09.280 11:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:09.280 11:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:09.280 11:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # 
'[' concat '!=' raid1 ']' 00:09:09.280 11:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:09.280 11:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:09.280 11:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66830 00:09:09.280 11:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:09.280 11:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66830 00:09:09.280 11:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66830 ']' 00:09:09.281 11:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.281 11:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.281 11:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.281 11:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.281 11:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.281 [2024-11-27 11:47:35.574772] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:09:09.281 [2024-11-27 11:47:35.574983] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66830 ] 00:09:09.539 [2024-11-27 11:47:35.753957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.539 [2024-11-27 11:47:35.878222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.799 [2024-11-27 11:47:36.098779] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.799 [2024-11-27 11:47:36.098860] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.370 11:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.370 11:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:10.370 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:10.370 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:10.370 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:10.370 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:10.370 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:10.370 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:10.370 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:10.370 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:10.370 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:10.370 
11:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.370 11:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.370 malloc1 00:09:10.370 11:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.370 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:10.370 11:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.370 11:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.370 [2024-11-27 11:47:36.519046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:10.370 [2024-11-27 11:47:36.519189] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.370 [2024-11-27 11:47:36.519236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:10.370 [2024-11-27 11:47:36.519270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.370 [2024-11-27 11:47:36.521787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.370 [2024-11-27 11:47:36.521894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:10.370 pt1 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.371 malloc2 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.371 [2024-11-27 11:47:36.578875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:10.371 [2024-11-27 11:47:36.578992] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.371 [2024-11-27 11:47:36.579025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:10.371 [2024-11-27 11:47:36.579035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.371 [2024-11-27 11:47:36.581373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.371 [2024-11-27 11:47:36.581427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:10.371 
pt2 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.371 malloc3 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.371 [2024-11-27 11:47:36.652265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:10.371 [2024-11-27 11:47:36.652390] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.371 [2024-11-27 11:47:36.652445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:10.371 [2024-11-27 11:47:36.652478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.371 [2024-11-27 11:47:36.654787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.371 [2024-11-27 11:47:36.654891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:10.371 pt3 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.371 [2024-11-27 11:47:36.664297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:10.371 [2024-11-27 11:47:36.666333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:10.371 [2024-11-27 11:47:36.666488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:10.371 [2024-11-27 11:47:36.666706] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:10.371 [2024-11-27 11:47:36.666724] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:10.371 [2024-11-27 11:47:36.667086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:10.371 [2024-11-27 11:47:36.667281] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:10.371 [2024-11-27 11:47:36.667297] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:10.371 [2024-11-27 11:47:36.667506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.371 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.371 "name": "raid_bdev1", 00:09:10.371 "uuid": "b003bbd7-d1a2-40d4-b201-185c7f0fe498", 00:09:10.371 "strip_size_kb": 64, 00:09:10.371 "state": "online", 00:09:10.371 "raid_level": "concat", 00:09:10.371 "superblock": true, 00:09:10.371 "num_base_bdevs": 3, 00:09:10.371 "num_base_bdevs_discovered": 3, 00:09:10.371 "num_base_bdevs_operational": 3, 00:09:10.371 "base_bdevs_list": [ 00:09:10.371 { 00:09:10.371 "name": "pt1", 00:09:10.371 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:10.371 "is_configured": true, 00:09:10.371 "data_offset": 2048, 00:09:10.371 "data_size": 63488 00:09:10.372 }, 00:09:10.372 { 00:09:10.372 "name": "pt2", 00:09:10.372 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:10.372 "is_configured": true, 00:09:10.372 "data_offset": 2048, 00:09:10.372 "data_size": 63488 00:09:10.372 }, 00:09:10.372 { 00:09:10.372 "name": "pt3", 00:09:10.372 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:10.372 "is_configured": true, 00:09:10.372 "data_offset": 2048, 00:09:10.372 "data_size": 63488 00:09:10.372 } 00:09:10.372 ] 00:09:10.372 }' 00:09:10.372 11:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.372 11:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.940 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:10.940 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:10.940 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:10.940 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:10.940 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:10.940 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:10.940 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:10.940 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.940 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.940 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:10.940 [2024-11-27 11:47:37.159725] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:10.940 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.940 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:10.940 "name": "raid_bdev1", 00:09:10.940 "aliases": [ 00:09:10.940 "b003bbd7-d1a2-40d4-b201-185c7f0fe498" 00:09:10.940 ], 00:09:10.940 "product_name": "Raid Volume", 00:09:10.940 "block_size": 512, 00:09:10.940 "num_blocks": 190464, 00:09:10.940 "uuid": "b003bbd7-d1a2-40d4-b201-185c7f0fe498", 00:09:10.940 "assigned_rate_limits": { 00:09:10.941 "rw_ios_per_sec": 0, 00:09:10.941 "rw_mbytes_per_sec": 0, 00:09:10.941 "r_mbytes_per_sec": 0, 00:09:10.941 "w_mbytes_per_sec": 0 00:09:10.941 }, 00:09:10.941 "claimed": false, 00:09:10.941 "zoned": false, 00:09:10.941 "supported_io_types": { 00:09:10.941 "read": true, 00:09:10.941 "write": true, 00:09:10.941 "unmap": true, 00:09:10.941 "flush": true, 00:09:10.941 "reset": true, 00:09:10.941 "nvme_admin": false, 00:09:10.941 "nvme_io": false, 00:09:10.941 "nvme_io_md": false, 00:09:10.941 "write_zeroes": true, 00:09:10.941 "zcopy": false, 00:09:10.941 "get_zone_info": false, 00:09:10.941 "zone_management": false, 00:09:10.941 "zone_append": false, 00:09:10.941 "compare": 
false, 00:09:10.941 "compare_and_write": false, 00:09:10.941 "abort": false, 00:09:10.941 "seek_hole": false, 00:09:10.941 "seek_data": false, 00:09:10.941 "copy": false, 00:09:10.941 "nvme_iov_md": false 00:09:10.941 }, 00:09:10.941 "memory_domains": [ 00:09:10.941 { 00:09:10.941 "dma_device_id": "system", 00:09:10.941 "dma_device_type": 1 00:09:10.941 }, 00:09:10.941 { 00:09:10.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.941 "dma_device_type": 2 00:09:10.941 }, 00:09:10.941 { 00:09:10.941 "dma_device_id": "system", 00:09:10.941 "dma_device_type": 1 00:09:10.941 }, 00:09:10.941 { 00:09:10.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.941 "dma_device_type": 2 00:09:10.941 }, 00:09:10.941 { 00:09:10.941 "dma_device_id": "system", 00:09:10.941 "dma_device_type": 1 00:09:10.941 }, 00:09:10.941 { 00:09:10.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.941 "dma_device_type": 2 00:09:10.941 } 00:09:10.941 ], 00:09:10.941 "driver_specific": { 00:09:10.941 "raid": { 00:09:10.941 "uuid": "b003bbd7-d1a2-40d4-b201-185c7f0fe498", 00:09:10.941 "strip_size_kb": 64, 00:09:10.941 "state": "online", 00:09:10.941 "raid_level": "concat", 00:09:10.941 "superblock": true, 00:09:10.941 "num_base_bdevs": 3, 00:09:10.941 "num_base_bdevs_discovered": 3, 00:09:10.941 "num_base_bdevs_operational": 3, 00:09:10.941 "base_bdevs_list": [ 00:09:10.941 { 00:09:10.941 "name": "pt1", 00:09:10.941 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:10.941 "is_configured": true, 00:09:10.941 "data_offset": 2048, 00:09:10.941 "data_size": 63488 00:09:10.941 }, 00:09:10.941 { 00:09:10.941 "name": "pt2", 00:09:10.941 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:10.941 "is_configured": true, 00:09:10.941 "data_offset": 2048, 00:09:10.941 "data_size": 63488 00:09:10.941 }, 00:09:10.941 { 00:09:10.941 "name": "pt3", 00:09:10.941 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:10.941 "is_configured": true, 00:09:10.941 "data_offset": 2048, 00:09:10.941 
"data_size": 63488 00:09:10.941 } 00:09:10.941 ] 00:09:10.941 } 00:09:10.941 } 00:09:10.941 }' 00:09:10.941 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:10.941 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:10.941 pt2 00:09:10.941 pt3' 00:09:10.941 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.941 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:10.941 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:10.941 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.941 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:10.941 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.941 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.941 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:11.201 11:47:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:11.201 [2024-11-27 11:47:37.411275] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.201 11:47:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b003bbd7-d1a2-40d4-b201-185c7f0fe498 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b003bbd7-d1a2-40d4-b201-185c7f0fe498 ']' 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.201 [2024-11-27 11:47:37.442920] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:11.201 [2024-11-27 11:47:37.442953] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.201 [2024-11-27 11:47:37.443052] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.201 [2024-11-27 11:47:37.443119] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:11.201 [2024-11-27 11:47:37.443129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.201 11:47:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:11.201 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:11.202 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.202 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.202 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.202 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:11.202 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:11.202 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.202 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.202 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.202 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:11.202 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:11.202 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.202 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.202 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.202 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:11.202 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:09:11.202 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.202 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.202 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.461 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:11.461 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:11.461 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:11.461 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:11.461 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:11.461 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:11.461 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:11.461 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:11.461 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:11.461 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.461 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.461 [2024-11-27 11:47:37.594731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:11.461 [2024-11-27 11:47:37.596891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc2 is claimed 00:09:11.461 [2024-11-27 11:47:37.597001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:11.461 [2024-11-27 11:47:37.597100] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:11.461 [2024-11-27 11:47:37.597208] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:11.461 [2024-11-27 11:47:37.597271] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:11.461 [2024-11-27 11:47:37.597339] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:11.461 [2024-11-27 11:47:37.597372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:11.461 request: 00:09:11.461 { 00:09:11.461 "name": "raid_bdev1", 00:09:11.461 "raid_level": "concat", 00:09:11.461 "base_bdevs": [ 00:09:11.461 "malloc1", 00:09:11.461 "malloc2", 00:09:11.461 "malloc3" 00:09:11.461 ], 00:09:11.461 "strip_size_kb": 64, 00:09:11.461 "superblock": false, 00:09:11.461 "method": "bdev_raid_create", 00:09:11.461 "req_id": 1 00:09:11.461 } 00:09:11.461 Got JSON-RPC error response 00:09:11.461 response: 00:09:11.461 { 00:09:11.461 "code": -17, 00:09:11.461 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:11.461 } 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # 
(( !es == 0 )) 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.462 [2024-11-27 11:47:37.646572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:11.462 [2024-11-27 11:47:37.646692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.462 [2024-11-27 11:47:37.646719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:11.462 [2024-11-27 11:47:37.646727] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.462 [2024-11-27 11:47:37.649013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.462 [2024-11-27 11:47:37.649051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:11.462 [2024-11-27 11:47:37.649147] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:11.462 [2024-11-27 11:47:37.649208] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:11.462 pt1 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.462 "name": "raid_bdev1", 
00:09:11.462 "uuid": "b003bbd7-d1a2-40d4-b201-185c7f0fe498", 00:09:11.462 "strip_size_kb": 64, 00:09:11.462 "state": "configuring", 00:09:11.462 "raid_level": "concat", 00:09:11.462 "superblock": true, 00:09:11.462 "num_base_bdevs": 3, 00:09:11.462 "num_base_bdevs_discovered": 1, 00:09:11.462 "num_base_bdevs_operational": 3, 00:09:11.462 "base_bdevs_list": [ 00:09:11.462 { 00:09:11.462 "name": "pt1", 00:09:11.462 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:11.462 "is_configured": true, 00:09:11.462 "data_offset": 2048, 00:09:11.462 "data_size": 63488 00:09:11.462 }, 00:09:11.462 { 00:09:11.462 "name": null, 00:09:11.462 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:11.462 "is_configured": false, 00:09:11.462 "data_offset": 2048, 00:09:11.462 "data_size": 63488 00:09:11.462 }, 00:09:11.462 { 00:09:11.462 "name": null, 00:09:11.462 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:11.462 "is_configured": false, 00:09:11.462 "data_offset": 2048, 00:09:11.462 "data_size": 63488 00:09:11.462 } 00:09:11.462 ] 00:09:11.462 }' 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.462 11:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.721 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:11.721 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:11.721 11:47:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.721 11:47:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.981 [2024-11-27 11:47:38.105781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:11.981 [2024-11-27 11:47:38.105954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.981 [2024-11-27 11:47:38.106017] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:11.981 [2024-11-27 11:47:38.106058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.981 [2024-11-27 11:47:38.106574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.981 [2024-11-27 11:47:38.106650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:11.981 [2024-11-27 11:47:38.106770] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:11.981 [2024-11-27 11:47:38.106829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:11.981 pt2 00:09:11.981 11:47:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.981 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:11.981 11:47:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.981 11:47:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.981 [2024-11-27 11:47:38.117779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:11.981 11:47:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.981 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:11.981 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:11.981 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.981 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.981 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.981 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:09:11.981 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.981 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.981 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.981 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.981 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.981 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.981 11:47:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.981 11:47:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.981 11:47:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.981 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.981 "name": "raid_bdev1", 00:09:11.981 "uuid": "b003bbd7-d1a2-40d4-b201-185c7f0fe498", 00:09:11.981 "strip_size_kb": 64, 00:09:11.981 "state": "configuring", 00:09:11.981 "raid_level": "concat", 00:09:11.981 "superblock": true, 00:09:11.981 "num_base_bdevs": 3, 00:09:11.981 "num_base_bdevs_discovered": 1, 00:09:11.981 "num_base_bdevs_operational": 3, 00:09:11.981 "base_bdevs_list": [ 00:09:11.981 { 00:09:11.981 "name": "pt1", 00:09:11.981 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:11.981 "is_configured": true, 00:09:11.981 "data_offset": 2048, 00:09:11.981 "data_size": 63488 00:09:11.981 }, 00:09:11.981 { 00:09:11.981 "name": null, 00:09:11.981 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:11.981 "is_configured": false, 00:09:11.981 "data_offset": 0, 00:09:11.981 "data_size": 63488 00:09:11.981 }, 00:09:11.981 { 00:09:11.981 "name": null, 00:09:11.981 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:11.981 "is_configured": false, 00:09:11.981 "data_offset": 2048, 00:09:11.981 "data_size": 63488 00:09:11.982 } 00:09:11.982 ] 00:09:11.982 }' 00:09:11.982 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.982 11:47:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.242 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:12.242 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:12.242 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:12.242 11:47:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.242 11:47:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.242 [2024-11-27 11:47:38.592966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:12.242 [2024-11-27 11:47:38.593108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.242 [2024-11-27 11:47:38.593170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:12.242 [2024-11-27 11:47:38.593206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.242 [2024-11-27 11:47:38.593759] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.242 [2024-11-27 11:47:38.593828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:12.242 [2024-11-27 11:47:38.593939] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:12.242 [2024-11-27 11:47:38.593969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:12.242 pt2 00:09:12.242 11:47:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.242 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:12.242 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:12.242 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:12.242 11:47:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.242 11:47:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.242 [2024-11-27 11:47:38.604922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:12.242 [2024-11-27 11:47:38.604979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.242 [2024-11-27 11:47:38.604995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:12.242 [2024-11-27 11:47:38.605007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.242 [2024-11-27 11:47:38.605451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.242 [2024-11-27 11:47:38.605473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:12.242 [2024-11-27 11:47:38.605548] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:12.242 [2024-11-27 11:47:38.605570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:12.242 [2024-11-27 11:47:38.605694] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:12.242 [2024-11-27 11:47:38.605705] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:12.242 [2024-11-27 11:47:38.605985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:09:12.242 [2024-11-27 11:47:38.606148] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:12.242 [2024-11-27 11:47:38.606164] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:12.242 [2024-11-27 11:47:38.606317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.242 pt3 00:09:12.242 11:47:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.242 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:12.242 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:12.242 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:12.242 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:12.242 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:12.242 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.242 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.242 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.242 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.242 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.242 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.242 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.242 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:12.242 11:47:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.242 11:47:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.242 11:47:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.501 11:47:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.502 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.502 "name": "raid_bdev1", 00:09:12.502 "uuid": "b003bbd7-d1a2-40d4-b201-185c7f0fe498", 00:09:12.502 "strip_size_kb": 64, 00:09:12.502 "state": "online", 00:09:12.502 "raid_level": "concat", 00:09:12.502 "superblock": true, 00:09:12.502 "num_base_bdevs": 3, 00:09:12.502 "num_base_bdevs_discovered": 3, 00:09:12.502 "num_base_bdevs_operational": 3, 00:09:12.502 "base_bdevs_list": [ 00:09:12.502 { 00:09:12.502 "name": "pt1", 00:09:12.502 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:12.502 "is_configured": true, 00:09:12.502 "data_offset": 2048, 00:09:12.502 "data_size": 63488 00:09:12.502 }, 00:09:12.502 { 00:09:12.502 "name": "pt2", 00:09:12.502 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:12.502 "is_configured": true, 00:09:12.502 "data_offset": 2048, 00:09:12.502 "data_size": 63488 00:09:12.502 }, 00:09:12.502 { 00:09:12.502 "name": "pt3", 00:09:12.502 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:12.502 "is_configured": true, 00:09:12.502 "data_offset": 2048, 00:09:12.502 "data_size": 63488 00:09:12.502 } 00:09:12.502 ] 00:09:12.502 }' 00:09:12.502 11:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.502 11:47:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.761 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:12.761 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:09:12.761 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:12.761 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:12.761 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:12.761 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:12.761 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:12.761 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:12.761 11:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.761 11:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.761 [2024-11-27 11:47:39.056517] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:12.761 11:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.761 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:12.761 "name": "raid_bdev1", 00:09:12.761 "aliases": [ 00:09:12.761 "b003bbd7-d1a2-40d4-b201-185c7f0fe498" 00:09:12.761 ], 00:09:12.761 "product_name": "Raid Volume", 00:09:12.761 "block_size": 512, 00:09:12.761 "num_blocks": 190464, 00:09:12.761 "uuid": "b003bbd7-d1a2-40d4-b201-185c7f0fe498", 00:09:12.761 "assigned_rate_limits": { 00:09:12.761 "rw_ios_per_sec": 0, 00:09:12.761 "rw_mbytes_per_sec": 0, 00:09:12.761 "r_mbytes_per_sec": 0, 00:09:12.761 "w_mbytes_per_sec": 0 00:09:12.761 }, 00:09:12.761 "claimed": false, 00:09:12.761 "zoned": false, 00:09:12.761 "supported_io_types": { 00:09:12.761 "read": true, 00:09:12.761 "write": true, 00:09:12.761 "unmap": true, 00:09:12.761 "flush": true, 00:09:12.761 "reset": true, 00:09:12.761 "nvme_admin": false, 00:09:12.761 "nvme_io": false, 00:09:12.761 
"nvme_io_md": false, 00:09:12.761 "write_zeroes": true, 00:09:12.761 "zcopy": false, 00:09:12.761 "get_zone_info": false, 00:09:12.761 "zone_management": false, 00:09:12.761 "zone_append": false, 00:09:12.761 "compare": false, 00:09:12.761 "compare_and_write": false, 00:09:12.761 "abort": false, 00:09:12.761 "seek_hole": false, 00:09:12.761 "seek_data": false, 00:09:12.761 "copy": false, 00:09:12.761 "nvme_iov_md": false 00:09:12.761 }, 00:09:12.761 "memory_domains": [ 00:09:12.761 { 00:09:12.761 "dma_device_id": "system", 00:09:12.761 "dma_device_type": 1 00:09:12.761 }, 00:09:12.761 { 00:09:12.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.761 "dma_device_type": 2 00:09:12.761 }, 00:09:12.761 { 00:09:12.761 "dma_device_id": "system", 00:09:12.761 "dma_device_type": 1 00:09:12.761 }, 00:09:12.761 { 00:09:12.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.761 "dma_device_type": 2 00:09:12.761 }, 00:09:12.761 { 00:09:12.761 "dma_device_id": "system", 00:09:12.761 "dma_device_type": 1 00:09:12.761 }, 00:09:12.761 { 00:09:12.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.761 "dma_device_type": 2 00:09:12.761 } 00:09:12.761 ], 00:09:12.761 "driver_specific": { 00:09:12.761 "raid": { 00:09:12.761 "uuid": "b003bbd7-d1a2-40d4-b201-185c7f0fe498", 00:09:12.761 "strip_size_kb": 64, 00:09:12.761 "state": "online", 00:09:12.761 "raid_level": "concat", 00:09:12.761 "superblock": true, 00:09:12.761 "num_base_bdevs": 3, 00:09:12.761 "num_base_bdevs_discovered": 3, 00:09:12.761 "num_base_bdevs_operational": 3, 00:09:12.761 "base_bdevs_list": [ 00:09:12.761 { 00:09:12.761 "name": "pt1", 00:09:12.761 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:12.761 "is_configured": true, 00:09:12.761 "data_offset": 2048, 00:09:12.761 "data_size": 63488 00:09:12.761 }, 00:09:12.761 { 00:09:12.761 "name": "pt2", 00:09:12.761 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:12.761 "is_configured": true, 00:09:12.761 "data_offset": 2048, 00:09:12.761 "data_size": 
63488 00:09:12.761 }, 00:09:12.761 { 00:09:12.761 "name": "pt3", 00:09:12.761 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:12.761 "is_configured": true, 00:09:12.761 "data_offset": 2048, 00:09:12.761 "data_size": 63488 00:09:12.761 } 00:09:12.761 ] 00:09:12.761 } 00:09:12.761 } 00:09:12.761 }' 00:09:12.761 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:13.019 pt2 00:09:13.019 pt3' 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r 
'.[] | .uuid' 00:09:13.019 [2024-11-27 11:47:39.328084] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b003bbd7-d1a2-40d4-b201-185c7f0fe498 '!=' b003bbd7-d1a2-40d4-b201-185c7f0fe498 ']' 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66830 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66830 ']' 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66830 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.019 11:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66830 00:09:13.278 11:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:13.278 11:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:13.278 11:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66830' 00:09:13.278 killing process with pid 66830 00:09:13.278 11:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66830 00:09:13.278 [2024-11-27 11:47:39.403818] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:13.278 11:47:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66830 00:09:13.278 [2024-11-27 11:47:39.404048] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:13.278 [2024-11-27 11:47:39.404123] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:13.278 [2024-11-27 11:47:39.404136] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:13.537 [2024-11-27 11:47:39.718502] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:14.916 11:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:14.916 00:09:14.916 real 0m5.414s 00:09:14.916 user 0m7.743s 00:09:14.916 sys 0m0.892s 00:09:14.916 11:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.916 11:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.916 ************************************ 00:09:14.916 END TEST raid_superblock_test 00:09:14.916 ************************************ 00:09:14.916 11:47:40 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:14.916 11:47:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:14.916 11:47:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.916 11:47:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:14.916 ************************************ 00:09:14.916 START TEST raid_read_error_test 00:09:14.916 ************************************ 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 
00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:14.916 11:47:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wi7fRQ0cj1 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67089 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67089 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67089 ']' 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.916 11:47:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.916 [2024-11-27 11:47:41.068429] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:09:14.916 [2024-11-27 11:47:41.068555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67089 ] 00:09:14.916 [2024-11-27 11:47:41.243473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.175 [2024-11-27 11:47:41.363680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.434 [2024-11-27 11:47:41.576782] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:15.434 [2024-11-27 11:47:41.576849] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:15.693 11:47:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.693 11:47:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:15.693 11:47:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:15.693 11:47:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:15.693 11:47:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.693 11:47:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.693 BaseBdev1_malloc 00:09:15.693 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.693 11:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:15.693 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.693 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.693 true 00:09:15.693 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:15.693 11:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:15.693 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.693 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.693 [2024-11-27 11:47:42.024999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:15.693 [2024-11-27 11:47:42.025059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.693 [2024-11-27 11:47:42.025080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:15.693 [2024-11-27 11:47:42.025091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.693 [2024-11-27 11:47:42.027225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.693 [2024-11-27 11:47:42.027263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:15.693 BaseBdev1 00:09:15.693 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.693 11:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:15.693 11:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:15.693 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.693 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.693 BaseBdev2_malloc 00:09:15.693 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.693 11:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:15.693 11:47:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.693 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.952 true 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.952 [2024-11-27 11:47:42.089733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:15.952 [2024-11-27 11:47:42.089789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.952 [2024-11-27 11:47:42.089807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:15.952 [2024-11-27 11:47:42.089818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.952 [2024-11-27 11:47:42.092040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.952 [2024-11-27 11:47:42.092078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:15.952 BaseBdev2 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.952 BaseBdev3_malloc 00:09:15.952 11:47:42 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.952 true 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.952 [2024-11-27 11:47:42.171190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:15.952 [2024-11-27 11:47:42.171249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.952 [2024-11-27 11:47:42.171270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:15.952 [2024-11-27 11:47:42.171281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.952 [2024-11-27 11:47:42.173628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.952 [2024-11-27 11:47:42.173665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:15.952 BaseBdev3 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.952 [2024-11-27 11:47:42.183239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:15.952 [2024-11-27 11:47:42.185272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.952 [2024-11-27 11:47:42.185362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:15.952 [2024-11-27 11:47:42.185593] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:15.952 [2024-11-27 11:47:42.185615] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:15.952 [2024-11-27 11:47:42.185925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:15.952 [2024-11-27 11:47:42.186104] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:15.952 [2024-11-27 11:47:42.186125] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:15.952 [2024-11-27 11:47:42.186299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.952 11:47:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.952 "name": "raid_bdev1", 00:09:15.952 "uuid": "5b5cd4d4-5263-4dab-a36a-b2e97610cacc", 00:09:15.952 "strip_size_kb": 64, 00:09:15.952 "state": "online", 00:09:15.952 "raid_level": "concat", 00:09:15.952 "superblock": true, 00:09:15.952 "num_base_bdevs": 3, 00:09:15.952 "num_base_bdevs_discovered": 3, 00:09:15.952 "num_base_bdevs_operational": 3, 00:09:15.952 "base_bdevs_list": [ 00:09:15.952 { 00:09:15.952 "name": "BaseBdev1", 00:09:15.952 "uuid": "46cc010d-9ebd-5a07-a1ba-fa5555aa2e99", 00:09:15.952 "is_configured": true, 00:09:15.952 "data_offset": 2048, 00:09:15.952 "data_size": 63488 00:09:15.952 }, 00:09:15.952 { 00:09:15.952 "name": "BaseBdev2", 00:09:15.952 "uuid": "a301f592-20fd-5e12-ba01-2437e9a1631e", 00:09:15.952 "is_configured": true, 00:09:15.952 "data_offset": 2048, 00:09:15.952 "data_size": 63488 
00:09:15.952 }, 00:09:15.952 { 00:09:15.952 "name": "BaseBdev3", 00:09:15.952 "uuid": "18137c4e-5afd-5d9d-899f-4a6d50d2cca0", 00:09:15.952 "is_configured": true, 00:09:15.952 "data_offset": 2048, 00:09:15.952 "data_size": 63488 00:09:15.952 } 00:09:15.952 ] 00:09:15.952 }' 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.952 11:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.520 11:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:16.520 11:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:16.520 [2024-11-27 11:47:42.739770] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:17.457 11:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:17.457 11:47:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.457 11:47:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.457 11:47:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.457 11:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:17.457 11:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:17.457 11:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:17.457 11:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:17.457 11:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.457 11:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:17.457 11:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.457 11:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.457 11:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.457 11:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.457 11:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.457 11:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.457 11:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.457 11:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.457 11:47:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.457 11:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.457 11:47:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.457 11:47:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.457 11:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.457 "name": "raid_bdev1", 00:09:17.458 "uuid": "5b5cd4d4-5263-4dab-a36a-b2e97610cacc", 00:09:17.458 "strip_size_kb": 64, 00:09:17.458 "state": "online", 00:09:17.458 "raid_level": "concat", 00:09:17.458 "superblock": true, 00:09:17.458 "num_base_bdevs": 3, 00:09:17.458 "num_base_bdevs_discovered": 3, 00:09:17.458 "num_base_bdevs_operational": 3, 00:09:17.458 "base_bdevs_list": [ 00:09:17.458 { 00:09:17.458 "name": "BaseBdev1", 00:09:17.458 "uuid": "46cc010d-9ebd-5a07-a1ba-fa5555aa2e99", 00:09:17.458 "is_configured": true, 00:09:17.458 "data_offset": 2048, 00:09:17.458 "data_size": 63488 
00:09:17.458 }, 00:09:17.458 { 00:09:17.458 "name": "BaseBdev2", 00:09:17.458 "uuid": "a301f592-20fd-5e12-ba01-2437e9a1631e", 00:09:17.458 "is_configured": true, 00:09:17.458 "data_offset": 2048, 00:09:17.458 "data_size": 63488 00:09:17.458 }, 00:09:17.458 { 00:09:17.458 "name": "BaseBdev3", 00:09:17.458 "uuid": "18137c4e-5afd-5d9d-899f-4a6d50d2cca0", 00:09:17.458 "is_configured": true, 00:09:17.458 "data_offset": 2048, 00:09:17.458 "data_size": 63488 00:09:17.458 } 00:09:17.458 ] 00:09:17.458 }' 00:09:17.458 11:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.458 11:47:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.716 11:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:17.716 11:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.716 11:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.716 [2024-11-27 11:47:44.096179] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:17.717 [2024-11-27 11:47:44.096223] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:17.717 [2024-11-27 11:47:44.099190] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:17.717 [2024-11-27 11:47:44.099240] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.717 [2024-11-27 11:47:44.099280] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:17.717 [2024-11-27 11:47:44.099290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:17.975 { 00:09:17.975 "results": [ 00:09:17.975 { 00:09:17.975 "job": "raid_bdev1", 00:09:17.975 "core_mask": "0x1", 00:09:17.975 "workload": "randrw", 00:09:17.975 "percentage": 50, 
00:09:17.975 "status": "finished", 00:09:17.975 "queue_depth": 1, 00:09:17.975 "io_size": 131072, 00:09:17.975 "runtime": 1.357197, 00:09:17.975 "iops": 14436.371433181772, 00:09:17.975 "mibps": 1804.5464291477215, 00:09:17.975 "io_failed": 1, 00:09:17.975 "io_timeout": 0, 00:09:17.975 "avg_latency_us": 95.90900271137275, 00:09:17.975 "min_latency_us": 27.053275109170304, 00:09:17.975 "max_latency_us": 1538.235807860262 00:09:17.975 } 00:09:17.975 ], 00:09:17.975 "core_count": 1 00:09:17.975 } 00:09:17.975 11:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.975 11:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67089 00:09:17.975 11:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67089 ']' 00:09:17.975 11:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67089 00:09:17.975 11:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:17.975 11:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.975 11:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67089 00:09:17.975 killing process with pid 67089 00:09:17.975 11:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:17.975 11:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:17.975 11:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67089' 00:09:17.975 11:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67089 00:09:17.975 [2024-11-27 11:47:44.138739] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:17.975 11:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67089 00:09:18.234 [2024-11-27 
11:47:44.383090] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:19.612 11:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:19.612 11:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wi7fRQ0cj1 00:09:19.612 11:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:19.612 11:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:19.613 11:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:19.613 11:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:19.613 11:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:19.613 11:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:19.613 ************************************ 00:09:19.613 END TEST raid_read_error_test 00:09:19.613 ************************************ 00:09:19.613 00:09:19.613 real 0m4.648s 00:09:19.613 user 0m5.578s 00:09:19.613 sys 0m0.553s 00:09:19.613 11:47:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.613 11:47:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.613 11:47:45 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:19.613 11:47:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:19.613 11:47:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.613 11:47:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:19.613 ************************************ 00:09:19.613 START TEST raid_write_error_test 00:09:19.613 ************************************ 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:19.613 11:47:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:19.613 11:47:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4RxKoiBJFg 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67234 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67234 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67234 ']' 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.613 11:47:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.613 [2024-11-27 11:47:45.785357] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:09:19.613 [2024-11-27 11:47:45.785567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67234 ] 00:09:19.613 [2024-11-27 11:47:45.961126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.872 [2024-11-27 11:47:46.078114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.131 [2024-11-27 11:47:46.280890] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.131 [2024-11-27 11:47:46.281051] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.390 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.390 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:20.390 11:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:20.390 11:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:20.390 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.390 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.390 BaseBdev1_malloc 00:09:20.390 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.390 11:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:20.390 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.390 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.390 true 00:09:20.390 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.390 11:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:20.390 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.390 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.390 [2024-11-27 11:47:46.761996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:20.390 [2024-11-27 11:47:46.762054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.390 [2024-11-27 11:47:46.762075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:20.390 [2024-11-27 11:47:46.762085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.390 [2024-11-27 11:47:46.764250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.390 [2024-11-27 11:47:46.764294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:20.390 BaseBdev1 00:09:20.390 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.390 11:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:20.390 11:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:20.390 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.390 11:47:46 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:20.649 BaseBdev2_malloc 00:09:20.649 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.650 true 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.650 [2024-11-27 11:47:46.829454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:20.650 [2024-11-27 11:47:46.829513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.650 [2024-11-27 11:47:46.829547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:20.650 [2024-11-27 11:47:46.829557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.650 [2024-11-27 11:47:46.832036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.650 [2024-11-27 11:47:46.832094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:20.650 BaseBdev2 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:20.650 11:47:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.650 BaseBdev3_malloc 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.650 true 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.650 [2024-11-27 11:47:46.911736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:20.650 [2024-11-27 11:47:46.911791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.650 [2024-11-27 11:47:46.911810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:20.650 [2024-11-27 11:47:46.911821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.650 [2024-11-27 11:47:46.914027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.650 [2024-11-27 11:47:46.914063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:20.650 BaseBdev3 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.650 [2024-11-27 11:47:46.923798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:20.650 [2024-11-27 11:47:46.925708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:20.650 [2024-11-27 11:47:46.925779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:20.650 [2024-11-27 11:47:46.925994] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:20.650 [2024-11-27 11:47:46.926007] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:20.650 [2024-11-27 11:47:46.926271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:20.650 [2024-11-27 11:47:46.926439] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:20.650 [2024-11-27 11:47:46.926454] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:20.650 [2024-11-27 11:47:46.926612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.650 "name": "raid_bdev1", 00:09:20.650 "uuid": "eb39e96a-e22d-4557-9457-26767b2df6f6", 00:09:20.650 "strip_size_kb": 64, 00:09:20.650 "state": "online", 00:09:20.650 "raid_level": "concat", 00:09:20.650 "superblock": true, 00:09:20.650 "num_base_bdevs": 3, 00:09:20.650 "num_base_bdevs_discovered": 3, 00:09:20.650 "num_base_bdevs_operational": 3, 00:09:20.650 "base_bdevs_list": [ 00:09:20.650 { 00:09:20.650 
"name": "BaseBdev1", 00:09:20.650 "uuid": "21fd7a2e-4048-5677-8006-bc8da1cca509", 00:09:20.650 "is_configured": true, 00:09:20.650 "data_offset": 2048, 00:09:20.650 "data_size": 63488 00:09:20.650 }, 00:09:20.650 { 00:09:20.650 "name": "BaseBdev2", 00:09:20.650 "uuid": "2cda0dc8-63cd-5a46-ac2a-151644acc923", 00:09:20.650 "is_configured": true, 00:09:20.650 "data_offset": 2048, 00:09:20.650 "data_size": 63488 00:09:20.650 }, 00:09:20.650 { 00:09:20.650 "name": "BaseBdev3", 00:09:20.650 "uuid": "cc29da32-907c-5f29-920c-e7d9dcedb851", 00:09:20.650 "is_configured": true, 00:09:20.650 "data_offset": 2048, 00:09:20.650 "data_size": 63488 00:09:20.650 } 00:09:20.650 ] 00:09:20.650 }' 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.650 11:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.229 11:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:21.229 11:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:21.229 [2024-11-27 11:47:47.472769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:22.167 11:47:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:22.167 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.167 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.167 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.167 11:47:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:22.167 11:47:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:22.167 11:47:48 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:22.167 11:47:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:22.167 11:47:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:22.167 11:47:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.167 11:47:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.167 11:47:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.167 11:47:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.167 11:47:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.167 11:47:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.167 11:47:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.167 11:47:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.167 11:47:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.167 11:47:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.167 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.167 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.167 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.167 11:47:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.167 "name": "raid_bdev1", 00:09:22.167 "uuid": "eb39e96a-e22d-4557-9457-26767b2df6f6", 00:09:22.167 "strip_size_kb": 64, 00:09:22.167 "state": "online", 
00:09:22.167 "raid_level": "concat", 00:09:22.167 "superblock": true, 00:09:22.167 "num_base_bdevs": 3, 00:09:22.167 "num_base_bdevs_discovered": 3, 00:09:22.167 "num_base_bdevs_operational": 3, 00:09:22.167 "base_bdevs_list": [ 00:09:22.167 { 00:09:22.167 "name": "BaseBdev1", 00:09:22.167 "uuid": "21fd7a2e-4048-5677-8006-bc8da1cca509", 00:09:22.167 "is_configured": true, 00:09:22.167 "data_offset": 2048, 00:09:22.167 "data_size": 63488 00:09:22.167 }, 00:09:22.167 { 00:09:22.167 "name": "BaseBdev2", 00:09:22.167 "uuid": "2cda0dc8-63cd-5a46-ac2a-151644acc923", 00:09:22.167 "is_configured": true, 00:09:22.167 "data_offset": 2048, 00:09:22.167 "data_size": 63488 00:09:22.167 }, 00:09:22.167 { 00:09:22.167 "name": "BaseBdev3", 00:09:22.167 "uuid": "cc29da32-907c-5f29-920c-e7d9dcedb851", 00:09:22.167 "is_configured": true, 00:09:22.167 "data_offset": 2048, 00:09:22.167 "data_size": 63488 00:09:22.167 } 00:09:22.167 ] 00:09:22.167 }' 00:09:22.167 11:47:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.167 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.735 11:47:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:22.735 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.735 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.735 [2024-11-27 11:47:48.844959] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:22.735 [2024-11-27 11:47:48.845059] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:22.735 [2024-11-27 11:47:48.847723] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:22.735 [2024-11-27 11:47:48.847826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.735 [2024-11-27 11:47:48.847895] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:22.735 [2024-11-27 11:47:48.847953] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:22.735 { 00:09:22.735 "results": [ 00:09:22.735 { 00:09:22.735 "job": "raid_bdev1", 00:09:22.735 "core_mask": "0x1", 00:09:22.735 "workload": "randrw", 00:09:22.735 "percentage": 50, 00:09:22.735 "status": "finished", 00:09:22.735 "queue_depth": 1, 00:09:22.735 "io_size": 131072, 00:09:22.735 "runtime": 1.373184, 00:09:22.735 "iops": 15440.756664802386, 00:09:22.735 "mibps": 1930.0945831002982, 00:09:22.735 "io_failed": 1, 00:09:22.735 "io_timeout": 0, 00:09:22.735 "avg_latency_us": 89.61241324657372, 00:09:22.735 "min_latency_us": 26.1589519650655, 00:09:22.735 "max_latency_us": 1366.5257641921398 00:09:22.735 } 00:09:22.735 ], 00:09:22.735 "core_count": 1 00:09:22.735 } 00:09:22.735 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.735 11:47:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67234 00:09:22.735 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67234 ']' 00:09:22.735 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67234 00:09:22.735 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:22.735 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:22.735 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67234 00:09:22.735 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:22.735 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:22.735 11:47:48 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 67234' 00:09:22.735 killing process with pid 67234 00:09:22.735 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67234 00:09:22.735 [2024-11-27 11:47:48.886238] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:22.735 11:47:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67234 00:09:22.735 [2024-11-27 11:47:49.114371] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:24.112 11:47:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:24.113 11:47:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4RxKoiBJFg 00:09:24.113 11:47:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:24.113 11:47:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:24.113 11:47:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:24.113 ************************************ 00:09:24.113 END TEST raid_write_error_test 00:09:24.113 ************************************ 00:09:24.113 11:47:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:24.113 11:47:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:24.113 11:47:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:24.113 00:09:24.113 real 0m4.654s 00:09:24.113 user 0m5.571s 00:09:24.113 sys 0m0.579s 00:09:24.113 11:47:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.113 11:47:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.113 11:47:50 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:24.113 11:47:50 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:24.113 11:47:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:24.113 11:47:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.113 11:47:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:24.113 ************************************ 00:09:24.113 START TEST raid_state_function_test 00:09:24.113 ************************************ 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67378 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67378' 00:09:24.113 Process raid pid: 67378 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67378 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67378 ']' 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.113 11:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.113 [2024-11-27 11:47:50.490785] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:09:24.113 [2024-11-27 11:47:50.491035] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.372 [2024-11-27 11:47:50.667620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.631 [2024-11-27 11:47:50.783996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.631 [2024-11-27 11:47:50.986870] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.631 [2024-11-27 11:47:50.986969] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.200 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:25.200 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:25.200 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:25.200 11:47:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.200 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.200 [2024-11-27 11:47:51.331631] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:25.200 [2024-11-27 11:47:51.331766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:25.200 [2024-11-27 11:47:51.331782] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:25.200 [2024-11-27 11:47:51.331792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:25.200 [2024-11-27 11:47:51.331799] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:25.200 [2024-11-27 11:47:51.331807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:25.200 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.200 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:25.200 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.200 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.200 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.200 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.200 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.200 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.200 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.200 
11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.200 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.200 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.200 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.200 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.200 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.200 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.200 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.200 "name": "Existed_Raid", 00:09:25.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.200 "strip_size_kb": 0, 00:09:25.200 "state": "configuring", 00:09:25.200 "raid_level": "raid1", 00:09:25.200 "superblock": false, 00:09:25.200 "num_base_bdevs": 3, 00:09:25.200 "num_base_bdevs_discovered": 0, 00:09:25.200 "num_base_bdevs_operational": 3, 00:09:25.200 "base_bdevs_list": [ 00:09:25.200 { 00:09:25.200 "name": "BaseBdev1", 00:09:25.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.200 "is_configured": false, 00:09:25.200 "data_offset": 0, 00:09:25.200 "data_size": 0 00:09:25.200 }, 00:09:25.200 { 00:09:25.200 "name": "BaseBdev2", 00:09:25.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.200 "is_configured": false, 00:09:25.200 "data_offset": 0, 00:09:25.200 "data_size": 0 00:09:25.200 }, 00:09:25.200 { 00:09:25.200 "name": "BaseBdev3", 00:09:25.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.200 "is_configured": false, 00:09:25.200 "data_offset": 0, 00:09:25.201 "data_size": 0 00:09:25.201 } 00:09:25.201 ] 00:09:25.201 }' 00:09:25.201 11:47:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.201 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.460 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:25.460 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.460 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.460 [2024-11-27 11:47:51.802783] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:25.460 [2024-11-27 11:47:51.802927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:25.460 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.460 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:25.460 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.460 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.460 [2024-11-27 11:47:51.814750] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:25.460 [2024-11-27 11:47:51.814858] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:25.460 [2024-11-27 11:47:51.814894] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:25.460 [2024-11-27 11:47:51.814922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:25.460 [2024-11-27 11:47:51.814943] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:25.460 [2024-11-27 11:47:51.815009] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:25.460 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.460 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:25.460 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.460 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.720 [2024-11-27 11:47:51.865410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:25.721 BaseBdev1 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.721 [ 00:09:25.721 { 00:09:25.721 "name": "BaseBdev1", 00:09:25.721 "aliases": [ 00:09:25.721 "bd92fb92-d553-4f9f-ad89-f0e5782a1713" 00:09:25.721 ], 00:09:25.721 "product_name": "Malloc disk", 00:09:25.721 "block_size": 512, 00:09:25.721 "num_blocks": 65536, 00:09:25.721 "uuid": "bd92fb92-d553-4f9f-ad89-f0e5782a1713", 00:09:25.721 "assigned_rate_limits": { 00:09:25.721 "rw_ios_per_sec": 0, 00:09:25.721 "rw_mbytes_per_sec": 0, 00:09:25.721 "r_mbytes_per_sec": 0, 00:09:25.721 "w_mbytes_per_sec": 0 00:09:25.721 }, 00:09:25.721 "claimed": true, 00:09:25.721 "claim_type": "exclusive_write", 00:09:25.721 "zoned": false, 00:09:25.721 "supported_io_types": { 00:09:25.721 "read": true, 00:09:25.721 "write": true, 00:09:25.721 "unmap": true, 00:09:25.721 "flush": true, 00:09:25.721 "reset": true, 00:09:25.721 "nvme_admin": false, 00:09:25.721 "nvme_io": false, 00:09:25.721 "nvme_io_md": false, 00:09:25.721 "write_zeroes": true, 00:09:25.721 "zcopy": true, 00:09:25.721 "get_zone_info": false, 00:09:25.721 "zone_management": false, 00:09:25.721 "zone_append": false, 00:09:25.721 "compare": false, 00:09:25.721 "compare_and_write": false, 00:09:25.721 "abort": true, 00:09:25.721 "seek_hole": false, 00:09:25.721 "seek_data": false, 00:09:25.721 "copy": true, 00:09:25.721 "nvme_iov_md": false 00:09:25.721 }, 00:09:25.721 "memory_domains": [ 00:09:25.721 { 00:09:25.721 "dma_device_id": "system", 00:09:25.721 "dma_device_type": 1 00:09:25.721 }, 00:09:25.721 { 00:09:25.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.721 "dma_device_type": 2 00:09:25.721 } 00:09:25.721 ], 00:09:25.721 "driver_specific": {} 00:09:25.721 } 00:09:25.721 ] 00:09:25.721 11:47:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:25.721 "name": "Existed_Raid", 00:09:25.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.721 "strip_size_kb": 0, 00:09:25.721 "state": "configuring", 00:09:25.721 "raid_level": "raid1", 00:09:25.721 "superblock": false, 00:09:25.721 "num_base_bdevs": 3, 00:09:25.721 "num_base_bdevs_discovered": 1, 00:09:25.721 "num_base_bdevs_operational": 3, 00:09:25.721 "base_bdevs_list": [ 00:09:25.721 { 00:09:25.721 "name": "BaseBdev1", 00:09:25.721 "uuid": "bd92fb92-d553-4f9f-ad89-f0e5782a1713", 00:09:25.721 "is_configured": true, 00:09:25.721 "data_offset": 0, 00:09:25.721 "data_size": 65536 00:09:25.721 }, 00:09:25.721 { 00:09:25.721 "name": "BaseBdev2", 00:09:25.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.721 "is_configured": false, 00:09:25.721 "data_offset": 0, 00:09:25.721 "data_size": 0 00:09:25.721 }, 00:09:25.721 { 00:09:25.721 "name": "BaseBdev3", 00:09:25.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.721 "is_configured": false, 00:09:25.721 "data_offset": 0, 00:09:25.721 "data_size": 0 00:09:25.721 } 00:09:25.721 ] 00:09:25.721 }' 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.721 11:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.293 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:26.293 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.293 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.293 [2024-11-27 11:47:52.420552] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:26.293 [2024-11-27 11:47:52.420616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:26.293 11:47:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.293 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:26.293 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.293 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.293 [2024-11-27 11:47:52.428578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:26.293 [2024-11-27 11:47:52.430600] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:26.293 [2024-11-27 11:47:52.430641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:26.293 [2024-11-27 11:47:52.430651] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:26.293 [2024-11-27 11:47:52.430660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:26.293 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.293 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:26.293 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:26.293 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:26.293 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.293 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.293 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.293 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:26.293 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.293 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.293 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.293 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.293 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.293 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.293 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.293 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.293 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.293 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.293 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.293 "name": "Existed_Raid", 00:09:26.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.293 "strip_size_kb": 0, 00:09:26.293 "state": "configuring", 00:09:26.293 "raid_level": "raid1", 00:09:26.293 "superblock": false, 00:09:26.293 "num_base_bdevs": 3, 00:09:26.293 "num_base_bdevs_discovered": 1, 00:09:26.293 "num_base_bdevs_operational": 3, 00:09:26.293 "base_bdevs_list": [ 00:09:26.293 { 00:09:26.293 "name": "BaseBdev1", 00:09:26.293 "uuid": "bd92fb92-d553-4f9f-ad89-f0e5782a1713", 00:09:26.293 "is_configured": true, 00:09:26.293 "data_offset": 0, 00:09:26.293 "data_size": 65536 00:09:26.293 }, 00:09:26.293 { 00:09:26.293 "name": "BaseBdev2", 00:09:26.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.293 
"is_configured": false, 00:09:26.293 "data_offset": 0, 00:09:26.293 "data_size": 0 00:09:26.293 }, 00:09:26.293 { 00:09:26.293 "name": "BaseBdev3", 00:09:26.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.293 "is_configured": false, 00:09:26.293 "data_offset": 0, 00:09:26.293 "data_size": 0 00:09:26.293 } 00:09:26.293 ] 00:09:26.293 }' 00:09:26.293 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.293 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.553 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:26.553 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.553 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.553 [2024-11-27 11:47:52.875919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:26.553 BaseBdev2 00:09:26.553 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.553 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:26.553 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:26.553 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:26.553 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:26.553 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:26.553 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:26.553 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:26.553 11:47:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.553 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.553 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.553 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:26.553 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.553 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.553 [ 00:09:26.553 { 00:09:26.553 "name": "BaseBdev2", 00:09:26.553 "aliases": [ 00:09:26.553 "ffdaf294-90a1-4c67-8404-aa515b3ac584" 00:09:26.553 ], 00:09:26.553 "product_name": "Malloc disk", 00:09:26.553 "block_size": 512, 00:09:26.553 "num_blocks": 65536, 00:09:26.553 "uuid": "ffdaf294-90a1-4c67-8404-aa515b3ac584", 00:09:26.553 "assigned_rate_limits": { 00:09:26.553 "rw_ios_per_sec": 0, 00:09:26.553 "rw_mbytes_per_sec": 0, 00:09:26.553 "r_mbytes_per_sec": 0, 00:09:26.554 "w_mbytes_per_sec": 0 00:09:26.554 }, 00:09:26.554 "claimed": true, 00:09:26.554 "claim_type": "exclusive_write", 00:09:26.554 "zoned": false, 00:09:26.554 "supported_io_types": { 00:09:26.554 "read": true, 00:09:26.554 "write": true, 00:09:26.554 "unmap": true, 00:09:26.554 "flush": true, 00:09:26.554 "reset": true, 00:09:26.554 "nvme_admin": false, 00:09:26.554 "nvme_io": false, 00:09:26.554 "nvme_io_md": false, 00:09:26.554 "write_zeroes": true, 00:09:26.554 "zcopy": true, 00:09:26.554 "get_zone_info": false, 00:09:26.554 "zone_management": false, 00:09:26.554 "zone_append": false, 00:09:26.554 "compare": false, 00:09:26.554 "compare_and_write": false, 00:09:26.554 "abort": true, 00:09:26.554 "seek_hole": false, 00:09:26.554 "seek_data": false, 00:09:26.554 "copy": true, 00:09:26.554 "nvme_iov_md": false 00:09:26.554 }, 00:09:26.554 
"memory_domains": [ 00:09:26.554 { 00:09:26.554 "dma_device_id": "system", 00:09:26.554 "dma_device_type": 1 00:09:26.554 }, 00:09:26.554 { 00:09:26.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.554 "dma_device_type": 2 00:09:26.554 } 00:09:26.554 ], 00:09:26.554 "driver_specific": {} 00:09:26.554 } 00:09:26.554 ] 00:09:26.554 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.554 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:26.554 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:26.554 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:26.554 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:26.554 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.554 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.554 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.554 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.554 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.554 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.554 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.554 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.554 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.554 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:26.554 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.554 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.554 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.814 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.814 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.814 "name": "Existed_Raid", 00:09:26.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.814 "strip_size_kb": 0, 00:09:26.814 "state": "configuring", 00:09:26.814 "raid_level": "raid1", 00:09:26.814 "superblock": false, 00:09:26.814 "num_base_bdevs": 3, 00:09:26.814 "num_base_bdevs_discovered": 2, 00:09:26.814 "num_base_bdevs_operational": 3, 00:09:26.814 "base_bdevs_list": [ 00:09:26.814 { 00:09:26.814 "name": "BaseBdev1", 00:09:26.814 "uuid": "bd92fb92-d553-4f9f-ad89-f0e5782a1713", 00:09:26.814 "is_configured": true, 00:09:26.814 "data_offset": 0, 00:09:26.814 "data_size": 65536 00:09:26.814 }, 00:09:26.814 { 00:09:26.814 "name": "BaseBdev2", 00:09:26.814 "uuid": "ffdaf294-90a1-4c67-8404-aa515b3ac584", 00:09:26.814 "is_configured": true, 00:09:26.814 "data_offset": 0, 00:09:26.814 "data_size": 65536 00:09:26.814 }, 00:09:26.814 { 00:09:26.814 "name": "BaseBdev3", 00:09:26.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.814 "is_configured": false, 00:09:26.814 "data_offset": 0, 00:09:26.814 "data_size": 0 00:09:26.814 } 00:09:26.814 ] 00:09:26.814 }' 00:09:26.814 11:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.814 11:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.074 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:27.074 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.074 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.333 [2024-11-27 11:47:53.465063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:27.333 [2024-11-27 11:47:53.465127] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:27.333 [2024-11-27 11:47:53.465141] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:27.333 [2024-11-27 11:47:53.465447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:27.333 [2024-11-27 11:47:53.465634] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:27.333 [2024-11-27 11:47:53.465645] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:27.333 [2024-11-27 11:47:53.465984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.333 BaseBdev3 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.333 [ 00:09:27.333 { 00:09:27.333 "name": "BaseBdev3", 00:09:27.333 "aliases": [ 00:09:27.333 "aaafcd78-1f85-4fb7-84fd-36e8b3913dfa" 00:09:27.333 ], 00:09:27.333 "product_name": "Malloc disk", 00:09:27.333 "block_size": 512, 00:09:27.333 "num_blocks": 65536, 00:09:27.333 "uuid": "aaafcd78-1f85-4fb7-84fd-36e8b3913dfa", 00:09:27.333 "assigned_rate_limits": { 00:09:27.333 "rw_ios_per_sec": 0, 00:09:27.333 "rw_mbytes_per_sec": 0, 00:09:27.333 "r_mbytes_per_sec": 0, 00:09:27.333 "w_mbytes_per_sec": 0 00:09:27.333 }, 00:09:27.333 "claimed": true, 00:09:27.333 "claim_type": "exclusive_write", 00:09:27.333 "zoned": false, 00:09:27.333 "supported_io_types": { 00:09:27.333 "read": true, 00:09:27.333 "write": true, 00:09:27.333 "unmap": true, 00:09:27.333 "flush": true, 00:09:27.333 "reset": true, 00:09:27.333 "nvme_admin": false, 00:09:27.333 "nvme_io": false, 00:09:27.333 "nvme_io_md": false, 00:09:27.333 "write_zeroes": true, 00:09:27.333 "zcopy": true, 00:09:27.333 "get_zone_info": false, 00:09:27.333 "zone_management": false, 00:09:27.333 "zone_append": false, 00:09:27.333 "compare": false, 00:09:27.333 "compare_and_write": false, 00:09:27.333 "abort": true, 00:09:27.333 "seek_hole": false, 00:09:27.333 "seek_data": false, 00:09:27.333 
"copy": true, 00:09:27.333 "nvme_iov_md": false 00:09:27.333 }, 00:09:27.333 "memory_domains": [ 00:09:27.333 { 00:09:27.333 "dma_device_id": "system", 00:09:27.333 "dma_device_type": 1 00:09:27.333 }, 00:09:27.333 { 00:09:27.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.333 "dma_device_type": 2 00:09:27.333 } 00:09:27.333 ], 00:09:27.333 "driver_specific": {} 00:09:27.333 } 00:09:27.333 ] 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.333 11:47:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.333 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.334 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.334 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.334 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.334 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.334 "name": "Existed_Raid", 00:09:27.334 "uuid": "559da106-4605-4fab-a244-7637adb65229", 00:09:27.334 "strip_size_kb": 0, 00:09:27.334 "state": "online", 00:09:27.334 "raid_level": "raid1", 00:09:27.334 "superblock": false, 00:09:27.334 "num_base_bdevs": 3, 00:09:27.334 "num_base_bdevs_discovered": 3, 00:09:27.334 "num_base_bdevs_operational": 3, 00:09:27.334 "base_bdevs_list": [ 00:09:27.334 { 00:09:27.334 "name": "BaseBdev1", 00:09:27.334 "uuid": "bd92fb92-d553-4f9f-ad89-f0e5782a1713", 00:09:27.334 "is_configured": true, 00:09:27.334 "data_offset": 0, 00:09:27.334 "data_size": 65536 00:09:27.334 }, 00:09:27.334 { 00:09:27.334 "name": "BaseBdev2", 00:09:27.334 "uuid": "ffdaf294-90a1-4c67-8404-aa515b3ac584", 00:09:27.334 "is_configured": true, 00:09:27.334 "data_offset": 0, 00:09:27.334 "data_size": 65536 00:09:27.334 }, 00:09:27.334 { 00:09:27.334 "name": "BaseBdev3", 00:09:27.334 "uuid": "aaafcd78-1f85-4fb7-84fd-36e8b3913dfa", 00:09:27.334 "is_configured": true, 00:09:27.334 "data_offset": 0, 00:09:27.334 "data_size": 65536 00:09:27.334 } 00:09:27.334 ] 00:09:27.334 }' 00:09:27.334 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.334 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.593 11:47:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:27.593 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:27.593 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:27.593 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:27.593 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:27.593 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:27.593 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:27.593 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:27.593 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.593 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.593 [2024-11-27 11:47:53.964730] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:27.852 11:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.852 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:27.852 "name": "Existed_Raid", 00:09:27.852 "aliases": [ 00:09:27.852 "559da106-4605-4fab-a244-7637adb65229" 00:09:27.852 ], 00:09:27.852 "product_name": "Raid Volume", 00:09:27.852 "block_size": 512, 00:09:27.852 "num_blocks": 65536, 00:09:27.852 "uuid": "559da106-4605-4fab-a244-7637adb65229", 00:09:27.852 "assigned_rate_limits": { 00:09:27.853 "rw_ios_per_sec": 0, 00:09:27.853 "rw_mbytes_per_sec": 0, 00:09:27.853 "r_mbytes_per_sec": 0, 00:09:27.853 "w_mbytes_per_sec": 0 00:09:27.853 }, 00:09:27.853 "claimed": false, 00:09:27.853 "zoned": false, 
00:09:27.853 "supported_io_types": { 00:09:27.853 "read": true, 00:09:27.853 "write": true, 00:09:27.853 "unmap": false, 00:09:27.853 "flush": false, 00:09:27.853 "reset": true, 00:09:27.853 "nvme_admin": false, 00:09:27.853 "nvme_io": false, 00:09:27.853 "nvme_io_md": false, 00:09:27.853 "write_zeroes": true, 00:09:27.853 "zcopy": false, 00:09:27.853 "get_zone_info": false, 00:09:27.853 "zone_management": false, 00:09:27.853 "zone_append": false, 00:09:27.853 "compare": false, 00:09:27.853 "compare_and_write": false, 00:09:27.853 "abort": false, 00:09:27.853 "seek_hole": false, 00:09:27.853 "seek_data": false, 00:09:27.853 "copy": false, 00:09:27.853 "nvme_iov_md": false 00:09:27.853 }, 00:09:27.853 "memory_domains": [ 00:09:27.853 { 00:09:27.853 "dma_device_id": "system", 00:09:27.853 "dma_device_type": 1 00:09:27.853 }, 00:09:27.853 { 00:09:27.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.853 "dma_device_type": 2 00:09:27.853 }, 00:09:27.853 { 00:09:27.853 "dma_device_id": "system", 00:09:27.853 "dma_device_type": 1 00:09:27.853 }, 00:09:27.853 { 00:09:27.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.853 "dma_device_type": 2 00:09:27.853 }, 00:09:27.853 { 00:09:27.853 "dma_device_id": "system", 00:09:27.853 "dma_device_type": 1 00:09:27.853 }, 00:09:27.853 { 00:09:27.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.853 "dma_device_type": 2 00:09:27.853 } 00:09:27.853 ], 00:09:27.853 "driver_specific": { 00:09:27.853 "raid": { 00:09:27.853 "uuid": "559da106-4605-4fab-a244-7637adb65229", 00:09:27.853 "strip_size_kb": 0, 00:09:27.853 "state": "online", 00:09:27.853 "raid_level": "raid1", 00:09:27.853 "superblock": false, 00:09:27.853 "num_base_bdevs": 3, 00:09:27.853 "num_base_bdevs_discovered": 3, 00:09:27.853 "num_base_bdevs_operational": 3, 00:09:27.853 "base_bdevs_list": [ 00:09:27.853 { 00:09:27.853 "name": "BaseBdev1", 00:09:27.853 "uuid": "bd92fb92-d553-4f9f-ad89-f0e5782a1713", 00:09:27.853 "is_configured": true, 00:09:27.853 
"data_offset": 0, 00:09:27.853 "data_size": 65536 00:09:27.853 }, 00:09:27.853 { 00:09:27.853 "name": "BaseBdev2", 00:09:27.853 "uuid": "ffdaf294-90a1-4c67-8404-aa515b3ac584", 00:09:27.853 "is_configured": true, 00:09:27.853 "data_offset": 0, 00:09:27.853 "data_size": 65536 00:09:27.853 }, 00:09:27.853 { 00:09:27.853 "name": "BaseBdev3", 00:09:27.853 "uuid": "aaafcd78-1f85-4fb7-84fd-36e8b3913dfa", 00:09:27.853 "is_configured": true, 00:09:27.853 "data_offset": 0, 00:09:27.853 "data_size": 65536 00:09:27.853 } 00:09:27.853 ] 00:09:27.853 } 00:09:27.853 } 00:09:27.853 }' 00:09:27.853 11:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:27.853 BaseBdev2 00:09:27.853 BaseBdev3' 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.853 11:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.112 [2024-11-27 11:47:54.236016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:28.112 11:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.112 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:28.112 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:28.112 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:28.112 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:28.112 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:28.112 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:28.112 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.112 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.112 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.112 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.112 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:28.112 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.112 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:28.112 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.112 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.112 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.112 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.112 11:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.112 11:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.112 11:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.112 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.112 "name": "Existed_Raid", 00:09:28.112 "uuid": "559da106-4605-4fab-a244-7637adb65229", 00:09:28.112 "strip_size_kb": 0, 00:09:28.112 "state": "online", 00:09:28.112 "raid_level": "raid1", 00:09:28.112 "superblock": false, 00:09:28.112 "num_base_bdevs": 3, 00:09:28.112 "num_base_bdevs_discovered": 2, 00:09:28.112 "num_base_bdevs_operational": 2, 00:09:28.112 "base_bdevs_list": [ 00:09:28.112 { 00:09:28.112 "name": null, 00:09:28.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.112 "is_configured": false, 00:09:28.112 "data_offset": 0, 00:09:28.112 "data_size": 65536 00:09:28.112 }, 00:09:28.112 { 00:09:28.112 "name": "BaseBdev2", 00:09:28.112 "uuid": "ffdaf294-90a1-4c67-8404-aa515b3ac584", 00:09:28.112 "is_configured": true, 00:09:28.112 "data_offset": 0, 00:09:28.112 "data_size": 65536 00:09:28.112 }, 00:09:28.112 { 00:09:28.112 "name": "BaseBdev3", 00:09:28.112 "uuid": "aaafcd78-1f85-4fb7-84fd-36e8b3913dfa", 00:09:28.112 "is_configured": true, 00:09:28.112 "data_offset": 0, 00:09:28.112 "data_size": 65536 00:09:28.112 } 00:09:28.112 ] 
00:09:28.112 }' 00:09:28.112 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.112 11:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.683 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:28.683 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:28.683 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.683 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:28.683 11:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.683 11:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.683 11:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.683 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:28.683 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:28.683 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:28.683 11:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.683 11:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.683 [2024-11-27 11:47:54.861537] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:28.683 11:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.683 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:28.683 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:28.683 11:47:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.683 11:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:28.683 11:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.683 11:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.683 11:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.683 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:28.683 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:28.683 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:28.683 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.683 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.683 [2024-11-27 11:47:55.026902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:28.683 [2024-11-27 11:47:55.027019] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:28.970 [2024-11-27 11:47:55.134603] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:28.970 [2024-11-27 11:47:55.134664] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:28.970 [2024-11-27 11:47:55.134677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:28.970 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.970 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:28.970 11:47:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:28.970 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.970 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:28.970 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.970 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.970 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.970 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:28.970 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:28.970 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:28.970 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:28.970 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:28.970 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:28.970 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.970 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.970 BaseBdev2 00:09:28.970 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.970 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:28.970 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:28.970 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:28.970 
11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:28.970 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:28.970 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:28.970 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:28.970 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.970 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.970 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.970 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:28.971 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.971 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.971 [ 00:09:28.971 { 00:09:28.971 "name": "BaseBdev2", 00:09:28.971 "aliases": [ 00:09:28.971 "b3397ed4-214e-4c50-9643-9d2fa55db07d" 00:09:28.971 ], 00:09:28.971 "product_name": "Malloc disk", 00:09:28.971 "block_size": 512, 00:09:28.971 "num_blocks": 65536, 00:09:28.971 "uuid": "b3397ed4-214e-4c50-9643-9d2fa55db07d", 00:09:28.971 "assigned_rate_limits": { 00:09:28.971 "rw_ios_per_sec": 0, 00:09:28.971 "rw_mbytes_per_sec": 0, 00:09:28.971 "r_mbytes_per_sec": 0, 00:09:28.971 "w_mbytes_per_sec": 0 00:09:28.971 }, 00:09:28.971 "claimed": false, 00:09:28.971 "zoned": false, 00:09:28.971 "supported_io_types": { 00:09:28.971 "read": true, 00:09:28.971 "write": true, 00:09:28.971 "unmap": true, 00:09:28.971 "flush": true, 00:09:28.971 "reset": true, 00:09:28.971 "nvme_admin": false, 00:09:28.971 "nvme_io": false, 00:09:28.971 "nvme_io_md": false, 00:09:28.971 "write_zeroes": true, 
00:09:28.971 "zcopy": true, 00:09:28.971 "get_zone_info": false, 00:09:28.971 "zone_management": false, 00:09:28.971 "zone_append": false, 00:09:28.971 "compare": false, 00:09:28.971 "compare_and_write": false, 00:09:28.971 "abort": true, 00:09:28.971 "seek_hole": false, 00:09:28.971 "seek_data": false, 00:09:28.971 "copy": true, 00:09:28.971 "nvme_iov_md": false 00:09:28.971 }, 00:09:28.971 "memory_domains": [ 00:09:28.971 { 00:09:28.971 "dma_device_id": "system", 00:09:28.971 "dma_device_type": 1 00:09:28.971 }, 00:09:28.971 { 00:09:28.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.971 "dma_device_type": 2 00:09:28.971 } 00:09:28.971 ], 00:09:28.971 "driver_specific": {} 00:09:28.971 } 00:09:28.971 ] 00:09:28.971 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.971 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:28.971 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:28.971 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:28.971 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:28.971 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.971 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.971 BaseBdev3 00:09:28.971 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.971 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:28.971 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:28.971 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:28.971 11:47:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:28.971 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:28.971 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:28.971 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:28.971 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.971 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.971 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.971 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:28.971 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.971 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.971 [ 00:09:28.971 { 00:09:28.971 "name": "BaseBdev3", 00:09:28.971 "aliases": [ 00:09:28.971 "8c37c546-c973-496e-b469-c92bbd98308c" 00:09:28.971 ], 00:09:28.971 "product_name": "Malloc disk", 00:09:28.971 "block_size": 512, 00:09:28.971 "num_blocks": 65536, 00:09:28.971 "uuid": "8c37c546-c973-496e-b469-c92bbd98308c", 00:09:28.971 "assigned_rate_limits": { 00:09:28.971 "rw_ios_per_sec": 0, 00:09:28.971 "rw_mbytes_per_sec": 0, 00:09:28.971 "r_mbytes_per_sec": 0, 00:09:28.971 "w_mbytes_per_sec": 0 00:09:28.971 }, 00:09:28.971 "claimed": false, 00:09:28.971 "zoned": false, 00:09:28.971 "supported_io_types": { 00:09:28.971 "read": true, 00:09:28.971 "write": true, 00:09:28.971 "unmap": true, 00:09:28.971 "flush": true, 00:09:29.234 "reset": true, 00:09:29.234 "nvme_admin": false, 00:09:29.234 "nvme_io": false, 00:09:29.234 "nvme_io_md": false, 00:09:29.234 "write_zeroes": true, 
00:09:29.234 "zcopy": true, 00:09:29.234 "get_zone_info": false, 00:09:29.234 "zone_management": false, 00:09:29.234 "zone_append": false, 00:09:29.234 "compare": false, 00:09:29.234 "compare_and_write": false, 00:09:29.234 "abort": true, 00:09:29.234 "seek_hole": false, 00:09:29.234 "seek_data": false, 00:09:29.234 "copy": true, 00:09:29.234 "nvme_iov_md": false 00:09:29.234 }, 00:09:29.234 "memory_domains": [ 00:09:29.234 { 00:09:29.234 "dma_device_id": "system", 00:09:29.234 "dma_device_type": 1 00:09:29.234 }, 00:09:29.234 { 00:09:29.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.234 "dma_device_type": 2 00:09:29.234 } 00:09:29.234 ], 00:09:29.234 "driver_specific": {} 00:09:29.234 } 00:09:29.234 ] 00:09:29.234 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.234 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:29.234 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:29.234 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:29.234 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:29.234 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.234 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.234 [2024-11-27 11:47:55.368197] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:29.234 [2024-11-27 11:47:55.368254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:29.234 [2024-11-27 11:47:55.368300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:29.234 [2024-11-27 11:47:55.370408] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:29.234 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.234 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:29.234 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.234 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.234 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.234 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.234 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.234 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.234 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.234 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.234 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.234 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.234 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.234 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.234 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.234 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.234 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:29.234 "name": "Existed_Raid", 00:09:29.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.234 "strip_size_kb": 0, 00:09:29.234 "state": "configuring", 00:09:29.234 "raid_level": "raid1", 00:09:29.235 "superblock": false, 00:09:29.235 "num_base_bdevs": 3, 00:09:29.235 "num_base_bdevs_discovered": 2, 00:09:29.235 "num_base_bdevs_operational": 3, 00:09:29.235 "base_bdevs_list": [ 00:09:29.235 { 00:09:29.235 "name": "BaseBdev1", 00:09:29.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.235 "is_configured": false, 00:09:29.235 "data_offset": 0, 00:09:29.235 "data_size": 0 00:09:29.235 }, 00:09:29.235 { 00:09:29.235 "name": "BaseBdev2", 00:09:29.235 "uuid": "b3397ed4-214e-4c50-9643-9d2fa55db07d", 00:09:29.235 "is_configured": true, 00:09:29.235 "data_offset": 0, 00:09:29.235 "data_size": 65536 00:09:29.235 }, 00:09:29.235 { 00:09:29.235 "name": "BaseBdev3", 00:09:29.235 "uuid": "8c37c546-c973-496e-b469-c92bbd98308c", 00:09:29.235 "is_configured": true, 00:09:29.235 "data_offset": 0, 00:09:29.235 "data_size": 65536 00:09:29.235 } 00:09:29.235 ] 00:09:29.235 }' 00:09:29.235 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.235 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.493 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:29.493 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.493 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.493 [2024-11-27 11:47:55.819661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:29.493 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.493 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:29.493 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.493 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.493 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.493 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.493 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.493 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.493 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.493 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.493 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.493 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.493 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.493 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.493 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.493 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.493 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.493 "name": "Existed_Raid", 00:09:29.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.493 "strip_size_kb": 0, 00:09:29.493 "state": "configuring", 00:09:29.493 "raid_level": "raid1", 00:09:29.493 "superblock": false, 00:09:29.493 "num_base_bdevs": 3, 
00:09:29.493 "num_base_bdevs_discovered": 1, 00:09:29.493 "num_base_bdevs_operational": 3, 00:09:29.493 "base_bdevs_list": [ 00:09:29.493 { 00:09:29.493 "name": "BaseBdev1", 00:09:29.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.493 "is_configured": false, 00:09:29.493 "data_offset": 0, 00:09:29.493 "data_size": 0 00:09:29.493 }, 00:09:29.493 { 00:09:29.493 "name": null, 00:09:29.493 "uuid": "b3397ed4-214e-4c50-9643-9d2fa55db07d", 00:09:29.493 "is_configured": false, 00:09:29.493 "data_offset": 0, 00:09:29.493 "data_size": 65536 00:09:29.493 }, 00:09:29.493 { 00:09:29.493 "name": "BaseBdev3", 00:09:29.493 "uuid": "8c37c546-c973-496e-b469-c92bbd98308c", 00:09:29.493 "is_configured": true, 00:09:29.493 "data_offset": 0, 00:09:29.493 "data_size": 65536 00:09:29.493 } 00:09:29.493 ] 00:09:29.493 }' 00:09:29.493 11:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.493 11:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.063 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.063 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:30.063 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.063 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.063 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.063 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:30.063 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:30.063 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.063 11:47:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.063 [2024-11-27 11:47:56.378067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:30.063 BaseBdev1 00:09:30.063 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.063 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:30.063 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:30.063 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:30.063 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:30.063 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:30.063 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:30.063 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:30.063 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.063 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.063 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.063 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:30.063 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.063 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.063 [ 00:09:30.063 { 00:09:30.063 "name": "BaseBdev1", 00:09:30.063 "aliases": [ 00:09:30.063 "1d1dea06-5862-41d2-8023-df2041fbf70d" 00:09:30.063 ], 00:09:30.063 "product_name": "Malloc disk", 
00:09:30.063 "block_size": 512, 00:09:30.063 "num_blocks": 65536, 00:09:30.063 "uuid": "1d1dea06-5862-41d2-8023-df2041fbf70d", 00:09:30.063 "assigned_rate_limits": { 00:09:30.063 "rw_ios_per_sec": 0, 00:09:30.063 "rw_mbytes_per_sec": 0, 00:09:30.063 "r_mbytes_per_sec": 0, 00:09:30.063 "w_mbytes_per_sec": 0 00:09:30.063 }, 00:09:30.063 "claimed": true, 00:09:30.063 "claim_type": "exclusive_write", 00:09:30.063 "zoned": false, 00:09:30.063 "supported_io_types": { 00:09:30.063 "read": true, 00:09:30.063 "write": true, 00:09:30.063 "unmap": true, 00:09:30.063 "flush": true, 00:09:30.063 "reset": true, 00:09:30.063 "nvme_admin": false, 00:09:30.063 "nvme_io": false, 00:09:30.063 "nvme_io_md": false, 00:09:30.063 "write_zeroes": true, 00:09:30.063 "zcopy": true, 00:09:30.063 "get_zone_info": false, 00:09:30.063 "zone_management": false, 00:09:30.063 "zone_append": false, 00:09:30.063 "compare": false, 00:09:30.063 "compare_and_write": false, 00:09:30.063 "abort": true, 00:09:30.063 "seek_hole": false, 00:09:30.063 "seek_data": false, 00:09:30.063 "copy": true, 00:09:30.063 "nvme_iov_md": false 00:09:30.063 }, 00:09:30.063 "memory_domains": [ 00:09:30.063 { 00:09:30.063 "dma_device_id": "system", 00:09:30.063 "dma_device_type": 1 00:09:30.063 }, 00:09:30.063 { 00:09:30.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.063 "dma_device_type": 2 00:09:30.063 } 00:09:30.063 ], 00:09:30.063 "driver_specific": {} 00:09:30.063 } 00:09:30.063 ] 00:09:30.063 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.063 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:30.063 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:30.063 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.063 11:47:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.063 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.064 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.064 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.064 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.064 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.064 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.064 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.064 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.064 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.064 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.064 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.064 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.322 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.323 "name": "Existed_Raid", 00:09:30.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.323 "strip_size_kb": 0, 00:09:30.323 "state": "configuring", 00:09:30.323 "raid_level": "raid1", 00:09:30.323 "superblock": false, 00:09:30.323 "num_base_bdevs": 3, 00:09:30.323 "num_base_bdevs_discovered": 2, 00:09:30.323 "num_base_bdevs_operational": 3, 00:09:30.323 "base_bdevs_list": [ 00:09:30.323 { 00:09:30.323 "name": "BaseBdev1", 00:09:30.323 "uuid": 
"1d1dea06-5862-41d2-8023-df2041fbf70d", 00:09:30.323 "is_configured": true, 00:09:30.323 "data_offset": 0, 00:09:30.323 "data_size": 65536 00:09:30.323 }, 00:09:30.323 { 00:09:30.323 "name": null, 00:09:30.323 "uuid": "b3397ed4-214e-4c50-9643-9d2fa55db07d", 00:09:30.323 "is_configured": false, 00:09:30.323 "data_offset": 0, 00:09:30.323 "data_size": 65536 00:09:30.323 }, 00:09:30.323 { 00:09:30.323 "name": "BaseBdev3", 00:09:30.323 "uuid": "8c37c546-c973-496e-b469-c92bbd98308c", 00:09:30.323 "is_configured": true, 00:09:30.323 "data_offset": 0, 00:09:30.323 "data_size": 65536 00:09:30.323 } 00:09:30.323 ] 00:09:30.323 }' 00:09:30.323 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.323 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.582 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:30.582 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.582 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.582 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.582 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.582 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:30.582 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:30.582 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.582 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.582 [2024-11-27 11:47:56.933203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:30.582 11:47:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.582 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:30.582 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.582 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.582 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.582 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.582 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.582 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.582 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.582 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.582 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.582 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.582 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.582 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.582 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.582 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.842 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.842 "name": "Existed_Raid", 00:09:30.842 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:30.842 "strip_size_kb": 0, 00:09:30.842 "state": "configuring", 00:09:30.842 "raid_level": "raid1", 00:09:30.842 "superblock": false, 00:09:30.842 "num_base_bdevs": 3, 00:09:30.842 "num_base_bdevs_discovered": 1, 00:09:30.842 "num_base_bdevs_operational": 3, 00:09:30.842 "base_bdevs_list": [ 00:09:30.842 { 00:09:30.842 "name": "BaseBdev1", 00:09:30.842 "uuid": "1d1dea06-5862-41d2-8023-df2041fbf70d", 00:09:30.842 "is_configured": true, 00:09:30.842 "data_offset": 0, 00:09:30.842 "data_size": 65536 00:09:30.842 }, 00:09:30.842 { 00:09:30.842 "name": null, 00:09:30.842 "uuid": "b3397ed4-214e-4c50-9643-9d2fa55db07d", 00:09:30.842 "is_configured": false, 00:09:30.842 "data_offset": 0, 00:09:30.842 "data_size": 65536 00:09:30.842 }, 00:09:30.842 { 00:09:30.842 "name": null, 00:09:30.842 "uuid": "8c37c546-c973-496e-b469-c92bbd98308c", 00:09:30.842 "is_configured": false, 00:09:30.842 "data_offset": 0, 00:09:30.842 "data_size": 65536 00:09:30.842 } 00:09:30.842 ] 00:09:30.842 }' 00:09:30.842 11:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.842 11:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.102 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:31.103 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.103 11:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.103 11:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.103 11:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.103 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:31.103 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:31.103 11:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.103 11:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.103 [2024-11-27 11:47:57.460371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:31.103 11:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.103 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:31.103 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.103 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.103 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.103 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.103 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.103 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.103 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.103 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.103 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.103 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.103 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.103 11:47:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.103 11:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.362 11:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.362 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.362 "name": "Existed_Raid", 00:09:31.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.362 "strip_size_kb": 0, 00:09:31.362 "state": "configuring", 00:09:31.362 "raid_level": "raid1", 00:09:31.362 "superblock": false, 00:09:31.362 "num_base_bdevs": 3, 00:09:31.362 "num_base_bdevs_discovered": 2, 00:09:31.362 "num_base_bdevs_operational": 3, 00:09:31.362 "base_bdevs_list": [ 00:09:31.362 { 00:09:31.362 "name": "BaseBdev1", 00:09:31.362 "uuid": "1d1dea06-5862-41d2-8023-df2041fbf70d", 00:09:31.362 "is_configured": true, 00:09:31.362 "data_offset": 0, 00:09:31.362 "data_size": 65536 00:09:31.362 }, 00:09:31.362 { 00:09:31.362 "name": null, 00:09:31.362 "uuid": "b3397ed4-214e-4c50-9643-9d2fa55db07d", 00:09:31.362 "is_configured": false, 00:09:31.362 "data_offset": 0, 00:09:31.362 "data_size": 65536 00:09:31.362 }, 00:09:31.362 { 00:09:31.362 "name": "BaseBdev3", 00:09:31.362 "uuid": "8c37c546-c973-496e-b469-c92bbd98308c", 00:09:31.362 "is_configured": true, 00:09:31.362 "data_offset": 0, 00:09:31.362 "data_size": 65536 00:09:31.362 } 00:09:31.362 ] 00:09:31.362 }' 00:09:31.362 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.362 11:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.621 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.621 11:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.621 11:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:31.621 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:31.621 11:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.621 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:31.621 11:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:31.621 11:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.621 11:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.621 [2024-11-27 11:47:57.963650] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:31.881 11:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.881 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:31.881 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.881 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.881 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.881 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.881 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.881 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.881 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.881 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.881 11:47:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.881 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.881 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.881 11:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.881 11:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.881 11:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.881 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.881 "name": "Existed_Raid", 00:09:31.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.881 "strip_size_kb": 0, 00:09:31.881 "state": "configuring", 00:09:31.881 "raid_level": "raid1", 00:09:31.881 "superblock": false, 00:09:31.881 "num_base_bdevs": 3, 00:09:31.881 "num_base_bdevs_discovered": 1, 00:09:31.881 "num_base_bdevs_operational": 3, 00:09:31.881 "base_bdevs_list": [ 00:09:31.881 { 00:09:31.881 "name": null, 00:09:31.881 "uuid": "1d1dea06-5862-41d2-8023-df2041fbf70d", 00:09:31.881 "is_configured": false, 00:09:31.881 "data_offset": 0, 00:09:31.881 "data_size": 65536 00:09:31.881 }, 00:09:31.881 { 00:09:31.881 "name": null, 00:09:31.881 "uuid": "b3397ed4-214e-4c50-9643-9d2fa55db07d", 00:09:31.881 "is_configured": false, 00:09:31.881 "data_offset": 0, 00:09:31.881 "data_size": 65536 00:09:31.881 }, 00:09:31.881 { 00:09:31.881 "name": "BaseBdev3", 00:09:31.881 "uuid": "8c37c546-c973-496e-b469-c92bbd98308c", 00:09:31.881 "is_configured": true, 00:09:31.881 "data_offset": 0, 00:09:31.881 "data_size": 65536 00:09:31.881 } 00:09:31.881 ] 00:09:31.881 }' 00:09:31.881 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.881 11:47:58 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:32.140 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.140 11:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.140 11:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.140 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:32.140 11:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.399 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:32.399 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:32.399 11:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.399 11:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.399 [2024-11-27 11:47:58.563561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:32.399 11:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.399 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:32.399 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.399 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.399 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.399 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.399 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:32.399 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.399 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.399 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.399 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.399 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.399 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.399 11:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.399 11:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.399 11:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.399 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.399 "name": "Existed_Raid", 00:09:32.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.399 "strip_size_kb": 0, 00:09:32.399 "state": "configuring", 00:09:32.399 "raid_level": "raid1", 00:09:32.399 "superblock": false, 00:09:32.399 "num_base_bdevs": 3, 00:09:32.399 "num_base_bdevs_discovered": 2, 00:09:32.399 "num_base_bdevs_operational": 3, 00:09:32.399 "base_bdevs_list": [ 00:09:32.399 { 00:09:32.399 "name": null, 00:09:32.399 "uuid": "1d1dea06-5862-41d2-8023-df2041fbf70d", 00:09:32.399 "is_configured": false, 00:09:32.399 "data_offset": 0, 00:09:32.399 "data_size": 65536 00:09:32.399 }, 00:09:32.399 { 00:09:32.399 "name": "BaseBdev2", 00:09:32.399 "uuid": "b3397ed4-214e-4c50-9643-9d2fa55db07d", 00:09:32.399 "is_configured": true, 00:09:32.399 "data_offset": 0, 00:09:32.399 "data_size": 65536 00:09:32.399 }, 00:09:32.399 { 
00:09:32.399 "name": "BaseBdev3", 00:09:32.399 "uuid": "8c37c546-c973-496e-b469-c92bbd98308c", 00:09:32.399 "is_configured": true, 00:09:32.399 "data_offset": 0, 00:09:32.399 "data_size": 65536 00:09:32.399 } 00:09:32.399 ] 00:09:32.399 }' 00:09:32.399 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.399 11:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.657 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:32.657 11:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.657 11:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.657 11:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.657 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.658 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:32.658 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.658 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.658 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:32.658 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.916 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.916 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1d1dea06-5862-41d2-8023-df2041fbf70d 00:09:32.916 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.916 11:47:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.916 [2024-11-27 11:47:59.119729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:32.916 [2024-11-27 11:47:59.119801] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:32.916 [2024-11-27 11:47:59.119810] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:32.916 [2024-11-27 11:47:59.120106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:32.916 [2024-11-27 11:47:59.120318] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:32.916 [2024-11-27 11:47:59.120340] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:32.916 [2024-11-27 11:47:59.120650] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.916 NewBaseBdev 00:09:32.916 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.916 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:32.916 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:32.916 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:32.916 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:32.916 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:32.916 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:32.916 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:32.916 11:47:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.916 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.916 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.916 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:32.916 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.916 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.916 [ 00:09:32.916 { 00:09:32.916 "name": "NewBaseBdev", 00:09:32.916 "aliases": [ 00:09:32.916 "1d1dea06-5862-41d2-8023-df2041fbf70d" 00:09:32.916 ], 00:09:32.916 "product_name": "Malloc disk", 00:09:32.916 "block_size": 512, 00:09:32.916 "num_blocks": 65536, 00:09:32.916 "uuid": "1d1dea06-5862-41d2-8023-df2041fbf70d", 00:09:32.916 "assigned_rate_limits": { 00:09:32.916 "rw_ios_per_sec": 0, 00:09:32.917 "rw_mbytes_per_sec": 0, 00:09:32.917 "r_mbytes_per_sec": 0, 00:09:32.917 "w_mbytes_per_sec": 0 00:09:32.917 }, 00:09:32.917 "claimed": true, 00:09:32.917 "claim_type": "exclusive_write", 00:09:32.917 "zoned": false, 00:09:32.917 "supported_io_types": { 00:09:32.917 "read": true, 00:09:32.917 "write": true, 00:09:32.917 "unmap": true, 00:09:32.917 "flush": true, 00:09:32.917 "reset": true, 00:09:32.917 "nvme_admin": false, 00:09:32.917 "nvme_io": false, 00:09:32.917 "nvme_io_md": false, 00:09:32.917 "write_zeroes": true, 00:09:32.917 "zcopy": true, 00:09:32.917 "get_zone_info": false, 00:09:32.917 "zone_management": false, 00:09:32.917 "zone_append": false, 00:09:32.917 "compare": false, 00:09:32.917 "compare_and_write": false, 00:09:32.917 "abort": true, 00:09:32.917 "seek_hole": false, 00:09:32.917 "seek_data": false, 00:09:32.917 "copy": true, 00:09:32.917 "nvme_iov_md": false 00:09:32.917 }, 00:09:32.917 "memory_domains": [ 00:09:32.917 { 00:09:32.917 
"dma_device_id": "system", 00:09:32.917 "dma_device_type": 1 00:09:32.917 }, 00:09:32.917 { 00:09:32.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.917 "dma_device_type": 2 00:09:32.917 } 00:09:32.917 ], 00:09:32.917 "driver_specific": {} 00:09:32.917 } 00:09:32.917 ] 00:09:32.917 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.917 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:32.917 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:32.917 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.917 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.917 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.917 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.917 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.917 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.917 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.917 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.917 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.917 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.917 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.917 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:32.917 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.917 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.917 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.917 "name": "Existed_Raid", 00:09:32.917 "uuid": "dc08d7c4-2cea-40ea-811c-8da33cfc5429", 00:09:32.917 "strip_size_kb": 0, 00:09:32.917 "state": "online", 00:09:32.917 "raid_level": "raid1", 00:09:32.917 "superblock": false, 00:09:32.917 "num_base_bdevs": 3, 00:09:32.917 "num_base_bdevs_discovered": 3, 00:09:32.917 "num_base_bdevs_operational": 3, 00:09:32.917 "base_bdevs_list": [ 00:09:32.917 { 00:09:32.917 "name": "NewBaseBdev", 00:09:32.917 "uuid": "1d1dea06-5862-41d2-8023-df2041fbf70d", 00:09:32.917 "is_configured": true, 00:09:32.917 "data_offset": 0, 00:09:32.917 "data_size": 65536 00:09:32.917 }, 00:09:32.917 { 00:09:32.917 "name": "BaseBdev2", 00:09:32.917 "uuid": "b3397ed4-214e-4c50-9643-9d2fa55db07d", 00:09:32.917 "is_configured": true, 00:09:32.917 "data_offset": 0, 00:09:32.917 "data_size": 65536 00:09:32.917 }, 00:09:32.917 { 00:09:32.917 "name": "BaseBdev3", 00:09:32.917 "uuid": "8c37c546-c973-496e-b469-c92bbd98308c", 00:09:32.917 "is_configured": true, 00:09:32.917 "data_offset": 0, 00:09:32.917 "data_size": 65536 00:09:32.917 } 00:09:32.917 ] 00:09:32.917 }' 00:09:32.917 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.917 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:33.529 11:47:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:33.529 [2024-11-27 11:47:59.607482] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:33.529 "name": "Existed_Raid", 00:09:33.529 "aliases": [ 00:09:33.529 "dc08d7c4-2cea-40ea-811c-8da33cfc5429" 00:09:33.529 ], 00:09:33.529 "product_name": "Raid Volume", 00:09:33.529 "block_size": 512, 00:09:33.529 "num_blocks": 65536, 00:09:33.529 "uuid": "dc08d7c4-2cea-40ea-811c-8da33cfc5429", 00:09:33.529 "assigned_rate_limits": { 00:09:33.529 "rw_ios_per_sec": 0, 00:09:33.529 "rw_mbytes_per_sec": 0, 00:09:33.529 "r_mbytes_per_sec": 0, 00:09:33.529 "w_mbytes_per_sec": 0 00:09:33.529 }, 00:09:33.529 "claimed": false, 00:09:33.529 "zoned": false, 00:09:33.529 "supported_io_types": { 00:09:33.529 "read": true, 00:09:33.529 "write": true, 00:09:33.529 "unmap": false, 00:09:33.529 "flush": false, 00:09:33.529 "reset": true, 00:09:33.529 "nvme_admin": false, 00:09:33.529 "nvme_io": false, 00:09:33.529 "nvme_io_md": false, 00:09:33.529 "write_zeroes": true, 00:09:33.529 "zcopy": false, 00:09:33.529 
"get_zone_info": false, 00:09:33.529 "zone_management": false, 00:09:33.529 "zone_append": false, 00:09:33.529 "compare": false, 00:09:33.529 "compare_and_write": false, 00:09:33.529 "abort": false, 00:09:33.529 "seek_hole": false, 00:09:33.529 "seek_data": false, 00:09:33.529 "copy": false, 00:09:33.529 "nvme_iov_md": false 00:09:33.529 }, 00:09:33.529 "memory_domains": [ 00:09:33.529 { 00:09:33.529 "dma_device_id": "system", 00:09:33.529 "dma_device_type": 1 00:09:33.529 }, 00:09:33.529 { 00:09:33.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.529 "dma_device_type": 2 00:09:33.529 }, 00:09:33.529 { 00:09:33.529 "dma_device_id": "system", 00:09:33.529 "dma_device_type": 1 00:09:33.529 }, 00:09:33.529 { 00:09:33.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.529 "dma_device_type": 2 00:09:33.529 }, 00:09:33.529 { 00:09:33.529 "dma_device_id": "system", 00:09:33.529 "dma_device_type": 1 00:09:33.529 }, 00:09:33.529 { 00:09:33.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.529 "dma_device_type": 2 00:09:33.529 } 00:09:33.529 ], 00:09:33.529 "driver_specific": { 00:09:33.529 "raid": { 00:09:33.529 "uuid": "dc08d7c4-2cea-40ea-811c-8da33cfc5429", 00:09:33.529 "strip_size_kb": 0, 00:09:33.529 "state": "online", 00:09:33.529 "raid_level": "raid1", 00:09:33.529 "superblock": false, 00:09:33.529 "num_base_bdevs": 3, 00:09:33.529 "num_base_bdevs_discovered": 3, 00:09:33.529 "num_base_bdevs_operational": 3, 00:09:33.529 "base_bdevs_list": [ 00:09:33.529 { 00:09:33.529 "name": "NewBaseBdev", 00:09:33.529 "uuid": "1d1dea06-5862-41d2-8023-df2041fbf70d", 00:09:33.529 "is_configured": true, 00:09:33.529 "data_offset": 0, 00:09:33.529 "data_size": 65536 00:09:33.529 }, 00:09:33.529 { 00:09:33.529 "name": "BaseBdev2", 00:09:33.529 "uuid": "b3397ed4-214e-4c50-9643-9d2fa55db07d", 00:09:33.529 "is_configured": true, 00:09:33.529 "data_offset": 0, 00:09:33.529 "data_size": 65536 00:09:33.529 }, 00:09:33.529 { 00:09:33.529 "name": "BaseBdev3", 00:09:33.529 "uuid": 
"8c37c546-c973-496e-b469-c92bbd98308c", 00:09:33.529 "is_configured": true, 00:09:33.529 "data_offset": 0, 00:09:33.529 "data_size": 65536 00:09:33.529 } 00:09:33.529 ] 00:09:33.529 } 00:09:33.529 } 00:09:33.529 }' 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:33.529 BaseBdev2 00:09:33.529 BaseBdev3' 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.529 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.787 
[2024-11-27 11:47:59.914549] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:33.787 [2024-11-27 11:47:59.914603] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:33.787 [2024-11-27 11:47:59.914709] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:33.787 [2024-11-27 11:47:59.915074] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:33.787 [2024-11-27 11:47:59.915097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:33.787 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.787 11:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67378 00:09:33.787 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67378 ']' 00:09:33.787 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67378 00:09:33.787 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:33.787 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:33.787 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67378 00:09:33.787 killing process with pid 67378 00:09:33.787 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:33.787 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:33.787 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67378' 00:09:33.787 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67378 00:09:33.787 [2024-11-27 
11:47:59.958096] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:33.787 11:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67378 00:09:34.046 [2024-11-27 11:48:00.305626] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:35.424 11:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:35.424 00:09:35.424 real 0m11.139s 00:09:35.424 user 0m17.579s 00:09:35.424 sys 0m1.951s 00:09:35.424 11:48:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.424 11:48:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.424 ************************************ 00:09:35.424 END TEST raid_state_function_test 00:09:35.424 ************************************ 00:09:35.424 11:48:01 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:35.424 11:48:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:35.424 11:48:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.424 11:48:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:35.424 ************************************ 00:09:35.424 START TEST raid_state_function_test_sb 00:09:35.424 ************************************ 00:09:35.424 11:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:09:35.424 11:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:35.424 11:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:35.424 11:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:35.424 11:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:35.424 11:48:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:35.424 11:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:35.425 11:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:35.425 11:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:35.425 11:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:35.425 11:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:35.425 11:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:35.425 11:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:35.425 11:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:35.425 11:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:35.425 11:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:35.425 11:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:35.425 11:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:35.425 11:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:35.425 11:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:35.425 11:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:35.425 11:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:35.425 11:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:35.425 
11:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:35.425 11:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:35.425 11:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:35.425 11:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68006 00:09:35.425 11:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:35.425 11:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68006' 00:09:35.425 Process raid pid: 68006 00:09:35.425 11:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68006 00:09:35.425 11:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68006 ']' 00:09:35.425 11:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.425 11:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:35.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.425 11:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.425 11:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:35.425 11:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.425 [2024-11-27 11:48:01.700956] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:09:35.425 [2024-11-27 11:48:01.701113] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.684 [2024-11-27 11:48:01.868275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.684 [2024-11-27 11:48:01.989219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.941 [2024-11-27 11:48:02.199556] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:35.941 [2024-11-27 11:48:02.199615] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.507 11:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.507 11:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:36.507 11:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:36.507 11:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.507 11:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.507 [2024-11-27 11:48:02.612924] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:36.507 [2024-11-27 11:48:02.612987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:36.507 [2024-11-27 11:48:02.613005] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:36.507 [2024-11-27 11:48:02.613017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:36.507 [2024-11-27 11:48:02.613025] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:36.507 [2024-11-27 11:48:02.613035] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:36.507 11:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.507 11:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:36.507 11:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.507 11:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.507 11:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.508 11:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.508 11:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.508 11:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.508 11:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.508 11:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.508 11:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.508 11:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.508 11:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.508 11:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.508 11:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.508 11:48:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.508 11:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.508 "name": "Existed_Raid", 00:09:36.508 "uuid": "957d3ab8-9531-484a-8bbf-f637642b9431", 00:09:36.508 "strip_size_kb": 0, 00:09:36.508 "state": "configuring", 00:09:36.508 "raid_level": "raid1", 00:09:36.508 "superblock": true, 00:09:36.508 "num_base_bdevs": 3, 00:09:36.508 "num_base_bdevs_discovered": 0, 00:09:36.508 "num_base_bdevs_operational": 3, 00:09:36.508 "base_bdevs_list": [ 00:09:36.508 { 00:09:36.508 "name": "BaseBdev1", 00:09:36.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.508 "is_configured": false, 00:09:36.508 "data_offset": 0, 00:09:36.508 "data_size": 0 00:09:36.508 }, 00:09:36.508 { 00:09:36.508 "name": "BaseBdev2", 00:09:36.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.508 "is_configured": false, 00:09:36.508 "data_offset": 0, 00:09:36.508 "data_size": 0 00:09:36.508 }, 00:09:36.508 { 00:09:36.508 "name": "BaseBdev3", 00:09:36.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.508 "is_configured": false, 00:09:36.508 "data_offset": 0, 00:09:36.508 "data_size": 0 00:09:36.508 } 00:09:36.508 ] 00:09:36.508 }' 00:09:36.508 11:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.508 11:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.765 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:36.765 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.765 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.765 [2024-11-27 11:48:03.096032] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:36.765 [2024-11-27 11:48:03.096083] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:36.765 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.765 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:36.765 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.765 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.765 [2024-11-27 11:48:03.108023] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:36.765 [2024-11-27 11:48:03.108075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:36.765 [2024-11-27 11:48:03.108086] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:36.765 [2024-11-27 11:48:03.108096] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:36.765 [2024-11-27 11:48:03.108103] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:36.765 [2024-11-27 11:48:03.108113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:36.766 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.766 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:36.766 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.766 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.024 [2024-11-27 11:48:03.158249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:37.024 BaseBdev1 
00:09:37.024 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.024 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:37.024 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:37.024 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.024 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:37.024 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.024 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.024 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.024 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.024 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.024 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.024 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:37.024 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.024 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.024 [ 00:09:37.024 { 00:09:37.024 "name": "BaseBdev1", 00:09:37.024 "aliases": [ 00:09:37.024 "6b6ece39-5e22-4c6d-aef9-ba88062052b7" 00:09:37.024 ], 00:09:37.024 "product_name": "Malloc disk", 00:09:37.024 "block_size": 512, 00:09:37.024 "num_blocks": 65536, 00:09:37.024 "uuid": "6b6ece39-5e22-4c6d-aef9-ba88062052b7", 00:09:37.024 "assigned_rate_limits": { 00:09:37.024 
"rw_ios_per_sec": 0, 00:09:37.024 "rw_mbytes_per_sec": 0, 00:09:37.024 "r_mbytes_per_sec": 0, 00:09:37.024 "w_mbytes_per_sec": 0 00:09:37.024 }, 00:09:37.024 "claimed": true, 00:09:37.024 "claim_type": "exclusive_write", 00:09:37.024 "zoned": false, 00:09:37.024 "supported_io_types": { 00:09:37.024 "read": true, 00:09:37.024 "write": true, 00:09:37.024 "unmap": true, 00:09:37.024 "flush": true, 00:09:37.024 "reset": true, 00:09:37.024 "nvme_admin": false, 00:09:37.024 "nvme_io": false, 00:09:37.024 "nvme_io_md": false, 00:09:37.024 "write_zeroes": true, 00:09:37.024 "zcopy": true, 00:09:37.024 "get_zone_info": false, 00:09:37.024 "zone_management": false, 00:09:37.024 "zone_append": false, 00:09:37.024 "compare": false, 00:09:37.024 "compare_and_write": false, 00:09:37.024 "abort": true, 00:09:37.024 "seek_hole": false, 00:09:37.024 "seek_data": false, 00:09:37.024 "copy": true, 00:09:37.024 "nvme_iov_md": false 00:09:37.024 }, 00:09:37.024 "memory_domains": [ 00:09:37.024 { 00:09:37.024 "dma_device_id": "system", 00:09:37.024 "dma_device_type": 1 00:09:37.024 }, 00:09:37.024 { 00:09:37.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.025 "dma_device_type": 2 00:09:37.025 } 00:09:37.025 ], 00:09:37.025 "driver_specific": {} 00:09:37.025 } 00:09:37.025 ] 00:09:37.025 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.025 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:37.025 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:37.025 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.025 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.025 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:37.025 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.025 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.025 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.025 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.025 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.025 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.025 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.025 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.025 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.025 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.025 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.025 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.025 "name": "Existed_Raid", 00:09:37.025 "uuid": "80b5e0f2-7eff-4b4a-8bae-80db94ba375d", 00:09:37.025 "strip_size_kb": 0, 00:09:37.025 "state": "configuring", 00:09:37.025 "raid_level": "raid1", 00:09:37.025 "superblock": true, 00:09:37.025 "num_base_bdevs": 3, 00:09:37.025 "num_base_bdevs_discovered": 1, 00:09:37.025 "num_base_bdevs_operational": 3, 00:09:37.025 "base_bdevs_list": [ 00:09:37.025 { 00:09:37.025 "name": "BaseBdev1", 00:09:37.025 "uuid": "6b6ece39-5e22-4c6d-aef9-ba88062052b7", 00:09:37.025 "is_configured": true, 00:09:37.025 "data_offset": 2048, 00:09:37.025 "data_size": 63488 
00:09:37.025 }, 00:09:37.025 { 00:09:37.025 "name": "BaseBdev2", 00:09:37.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.025 "is_configured": false, 00:09:37.025 "data_offset": 0, 00:09:37.025 "data_size": 0 00:09:37.025 }, 00:09:37.025 { 00:09:37.025 "name": "BaseBdev3", 00:09:37.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.025 "is_configured": false, 00:09:37.025 "data_offset": 0, 00:09:37.025 "data_size": 0 00:09:37.025 } 00:09:37.025 ] 00:09:37.025 }' 00:09:37.025 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.025 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.283 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:37.283 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.284 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.284 [2024-11-27 11:48:03.649475] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:37.284 [2024-11-27 11:48:03.649545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:37.284 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.284 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:37.284 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.284 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.284 [2024-11-27 11:48:03.661513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:37.284 [2024-11-27 11:48:03.663551] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:37.284 [2024-11-27 11:48:03.663595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:37.284 [2024-11-27 11:48:03.663622] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:37.284 [2024-11-27 11:48:03.663633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:37.543 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.543 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:37.543 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:37.543 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:37.543 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.543 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.543 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.543 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.543 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.543 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.543 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.543 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.543 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:37.543 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.543 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.543 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.543 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.543 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.543 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.543 "name": "Existed_Raid", 00:09:37.543 "uuid": "048139ed-aa8d-4dbf-a2d7-9fea5f78a7c7", 00:09:37.543 "strip_size_kb": 0, 00:09:37.543 "state": "configuring", 00:09:37.543 "raid_level": "raid1", 00:09:37.543 "superblock": true, 00:09:37.543 "num_base_bdevs": 3, 00:09:37.543 "num_base_bdevs_discovered": 1, 00:09:37.543 "num_base_bdevs_operational": 3, 00:09:37.543 "base_bdevs_list": [ 00:09:37.543 { 00:09:37.543 "name": "BaseBdev1", 00:09:37.543 "uuid": "6b6ece39-5e22-4c6d-aef9-ba88062052b7", 00:09:37.543 "is_configured": true, 00:09:37.543 "data_offset": 2048, 00:09:37.543 "data_size": 63488 00:09:37.543 }, 00:09:37.543 { 00:09:37.543 "name": "BaseBdev2", 00:09:37.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.543 "is_configured": false, 00:09:37.543 "data_offset": 0, 00:09:37.543 "data_size": 0 00:09:37.543 }, 00:09:37.543 { 00:09:37.543 "name": "BaseBdev3", 00:09:37.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.543 "is_configured": false, 00:09:37.543 "data_offset": 0, 00:09:37.543 "data_size": 0 00:09:37.543 } 00:09:37.543 ] 00:09:37.543 }' 00:09:37.543 11:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.543 11:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:37.802 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:37.802 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.802 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.061 [2024-11-27 11:48:04.188075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.061 BaseBdev2 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.061 [ 00:09:38.061 { 00:09:38.061 "name": "BaseBdev2", 00:09:38.061 "aliases": [ 00:09:38.061 "ab71f6c6-e34a-4e06-b672-efb60948e16e" 00:09:38.061 ], 00:09:38.061 "product_name": "Malloc disk", 00:09:38.061 "block_size": 512, 00:09:38.061 "num_blocks": 65536, 00:09:38.061 "uuid": "ab71f6c6-e34a-4e06-b672-efb60948e16e", 00:09:38.061 "assigned_rate_limits": { 00:09:38.061 "rw_ios_per_sec": 0, 00:09:38.061 "rw_mbytes_per_sec": 0, 00:09:38.061 "r_mbytes_per_sec": 0, 00:09:38.061 "w_mbytes_per_sec": 0 00:09:38.061 }, 00:09:38.061 "claimed": true, 00:09:38.061 "claim_type": "exclusive_write", 00:09:38.061 "zoned": false, 00:09:38.061 "supported_io_types": { 00:09:38.061 "read": true, 00:09:38.061 "write": true, 00:09:38.061 "unmap": true, 00:09:38.061 "flush": true, 00:09:38.061 "reset": true, 00:09:38.061 "nvme_admin": false, 00:09:38.061 "nvme_io": false, 00:09:38.061 "nvme_io_md": false, 00:09:38.061 "write_zeroes": true, 00:09:38.061 "zcopy": true, 00:09:38.061 "get_zone_info": false, 00:09:38.061 "zone_management": false, 00:09:38.061 "zone_append": false, 00:09:38.061 "compare": false, 00:09:38.061 "compare_and_write": false, 00:09:38.061 "abort": true, 00:09:38.061 "seek_hole": false, 00:09:38.061 "seek_data": false, 00:09:38.061 "copy": true, 00:09:38.061 "nvme_iov_md": false 00:09:38.061 }, 00:09:38.061 "memory_domains": [ 00:09:38.061 { 00:09:38.061 "dma_device_id": "system", 00:09:38.061 "dma_device_type": 1 00:09:38.061 }, 00:09:38.061 { 00:09:38.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.061 "dma_device_type": 2 00:09:38.061 } 00:09:38.061 ], 00:09:38.061 "driver_specific": {} 00:09:38.061 } 00:09:38.061 ] 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.061 
11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.061 "name": "Existed_Raid", 00:09:38.061 "uuid": "048139ed-aa8d-4dbf-a2d7-9fea5f78a7c7", 00:09:38.061 "strip_size_kb": 0, 00:09:38.061 "state": "configuring", 00:09:38.061 "raid_level": "raid1", 00:09:38.061 "superblock": true, 00:09:38.061 "num_base_bdevs": 3, 00:09:38.061 "num_base_bdevs_discovered": 2, 00:09:38.061 "num_base_bdevs_operational": 3, 00:09:38.061 "base_bdevs_list": [ 00:09:38.061 { 00:09:38.061 "name": "BaseBdev1", 00:09:38.061 "uuid": "6b6ece39-5e22-4c6d-aef9-ba88062052b7", 00:09:38.061 "is_configured": true, 00:09:38.061 "data_offset": 2048, 00:09:38.061 "data_size": 63488 00:09:38.061 }, 00:09:38.061 { 00:09:38.061 "name": "BaseBdev2", 00:09:38.061 "uuid": "ab71f6c6-e34a-4e06-b672-efb60948e16e", 00:09:38.061 "is_configured": true, 00:09:38.061 "data_offset": 2048, 00:09:38.061 "data_size": 63488 00:09:38.061 }, 00:09:38.061 { 00:09:38.061 "name": "BaseBdev3", 00:09:38.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.061 "is_configured": false, 00:09:38.061 "data_offset": 0, 00:09:38.061 "data_size": 0 00:09:38.061 } 00:09:38.061 ] 00:09:38.061 }' 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.061 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.320 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:38.320 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.320 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.580 [2024-11-27 11:48:04.735874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:38.580 [2024-11-27 11:48:04.736167] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:09:38.580 [2024-11-27 11:48:04.736199] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:38.580 BaseBdev3 00:09:38.580 [2024-11-27 11:48:04.736503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:38.580 [2024-11-27 11:48:04.736674] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:38.580 [2024-11-27 11:48:04.736685] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:38.580 [2024-11-27 11:48:04.736864] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.580 11:48:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.580 [ 00:09:38.580 { 00:09:38.580 "name": "BaseBdev3", 00:09:38.580 "aliases": [ 00:09:38.580 "0f4faf60-d7a4-4625-8086-1e2b61c0dd1a" 00:09:38.580 ], 00:09:38.580 "product_name": "Malloc disk", 00:09:38.580 "block_size": 512, 00:09:38.580 "num_blocks": 65536, 00:09:38.580 "uuid": "0f4faf60-d7a4-4625-8086-1e2b61c0dd1a", 00:09:38.580 "assigned_rate_limits": { 00:09:38.580 "rw_ios_per_sec": 0, 00:09:38.580 "rw_mbytes_per_sec": 0, 00:09:38.580 "r_mbytes_per_sec": 0, 00:09:38.580 "w_mbytes_per_sec": 0 00:09:38.580 }, 00:09:38.580 "claimed": true, 00:09:38.580 "claim_type": "exclusive_write", 00:09:38.580 "zoned": false, 00:09:38.580 "supported_io_types": { 00:09:38.580 "read": true, 00:09:38.580 "write": true, 00:09:38.580 "unmap": true, 00:09:38.580 "flush": true, 00:09:38.580 "reset": true, 00:09:38.580 "nvme_admin": false, 00:09:38.580 "nvme_io": false, 00:09:38.580 "nvme_io_md": false, 00:09:38.580 "write_zeroes": true, 00:09:38.580 "zcopy": true, 00:09:38.580 "get_zone_info": false, 00:09:38.580 "zone_management": false, 00:09:38.580 "zone_append": false, 00:09:38.580 "compare": false, 00:09:38.580 "compare_and_write": false, 00:09:38.580 "abort": true, 00:09:38.580 "seek_hole": false, 00:09:38.580 "seek_data": false, 00:09:38.580 "copy": true, 00:09:38.580 "nvme_iov_md": false 00:09:38.580 }, 00:09:38.580 "memory_domains": [ 00:09:38.580 { 00:09:38.580 "dma_device_id": "system", 00:09:38.580 "dma_device_type": 1 00:09:38.580 }, 00:09:38.580 { 00:09:38.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.580 "dma_device_type": 2 00:09:38.580 } 00:09:38.580 ], 00:09:38.580 "driver_specific": {} 00:09:38.580 } 00:09:38.580 ] 
00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.580 
11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.580 "name": "Existed_Raid", 00:09:38.580 "uuid": "048139ed-aa8d-4dbf-a2d7-9fea5f78a7c7", 00:09:38.580 "strip_size_kb": 0, 00:09:38.580 "state": "online", 00:09:38.580 "raid_level": "raid1", 00:09:38.580 "superblock": true, 00:09:38.580 "num_base_bdevs": 3, 00:09:38.580 "num_base_bdevs_discovered": 3, 00:09:38.580 "num_base_bdevs_operational": 3, 00:09:38.580 "base_bdevs_list": [ 00:09:38.580 { 00:09:38.580 "name": "BaseBdev1", 00:09:38.580 "uuid": "6b6ece39-5e22-4c6d-aef9-ba88062052b7", 00:09:38.580 "is_configured": true, 00:09:38.580 "data_offset": 2048, 00:09:38.580 "data_size": 63488 00:09:38.580 }, 00:09:38.580 { 00:09:38.580 "name": "BaseBdev2", 00:09:38.580 "uuid": "ab71f6c6-e34a-4e06-b672-efb60948e16e", 00:09:38.580 "is_configured": true, 00:09:38.580 "data_offset": 2048, 00:09:38.580 "data_size": 63488 00:09:38.580 }, 00:09:38.580 { 00:09:38.580 "name": "BaseBdev3", 00:09:38.580 "uuid": "0f4faf60-d7a4-4625-8086-1e2b61c0dd1a", 00:09:38.580 "is_configured": true, 00:09:38.580 "data_offset": 2048, 00:09:38.580 "data_size": 63488 00:09:38.580 } 00:09:38.580 ] 00:09:38.580 }' 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.580 11:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.167 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:39.167 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:39.167 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
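The `verify_raid_bdev_state Existed_Raid online raid1 0 3` call above fetches `bdev_raid_get_bdevs all`, selects the named array with `jq`, and compares the resulting fields against the expected state. A rough Python equivalent of those comparisons, fed with the JSON captured in this log entry (this mirrors only the field checks visible here; the shell function may validate more than shown):

```python
import json

# Trimmed copy of the raid_bdev_info JSON dumped by bdev_raid_get_bdevs above.
raid_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size, num_operational):
    # Same comparisons the test makes: state, level, strip size (0 for
    # raid1, which has no striping), and operational base bdev count.
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational)

# verify_raid_bdev_state Existed_Raid online raid1 0 3
print(verify_raid_bdev_state(raid_info, "online", "raid1", 0, 3))  # True
```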
00:09:39.167 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:39.167 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:39.167 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:39.167 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:39.167 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:39.167 11:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.167 11:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.167 [2024-11-27 11:48:05.259383] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.167 11:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.167 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:39.167 "name": "Existed_Raid", 00:09:39.167 "aliases": [ 00:09:39.167 "048139ed-aa8d-4dbf-a2d7-9fea5f78a7c7" 00:09:39.167 ], 00:09:39.167 "product_name": "Raid Volume", 00:09:39.167 "block_size": 512, 00:09:39.167 "num_blocks": 63488, 00:09:39.167 "uuid": "048139ed-aa8d-4dbf-a2d7-9fea5f78a7c7", 00:09:39.167 "assigned_rate_limits": { 00:09:39.167 "rw_ios_per_sec": 0, 00:09:39.167 "rw_mbytes_per_sec": 0, 00:09:39.167 "r_mbytes_per_sec": 0, 00:09:39.167 "w_mbytes_per_sec": 0 00:09:39.167 }, 00:09:39.167 "claimed": false, 00:09:39.167 "zoned": false, 00:09:39.167 "supported_io_types": { 00:09:39.167 "read": true, 00:09:39.168 "write": true, 00:09:39.168 "unmap": false, 00:09:39.168 "flush": false, 00:09:39.168 "reset": true, 00:09:39.168 "nvme_admin": false, 00:09:39.168 "nvme_io": false, 00:09:39.168 "nvme_io_md": false, 00:09:39.168 "write_zeroes": true, 
00:09:39.168 "zcopy": false, 00:09:39.168 "get_zone_info": false, 00:09:39.168 "zone_management": false, 00:09:39.168 "zone_append": false, 00:09:39.168 "compare": false, 00:09:39.168 "compare_and_write": false, 00:09:39.168 "abort": false, 00:09:39.168 "seek_hole": false, 00:09:39.168 "seek_data": false, 00:09:39.168 "copy": false, 00:09:39.168 "nvme_iov_md": false 00:09:39.168 }, 00:09:39.168 "memory_domains": [ 00:09:39.168 { 00:09:39.168 "dma_device_id": "system", 00:09:39.168 "dma_device_type": 1 00:09:39.168 }, 00:09:39.168 { 00:09:39.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.168 "dma_device_type": 2 00:09:39.168 }, 00:09:39.168 { 00:09:39.168 "dma_device_id": "system", 00:09:39.168 "dma_device_type": 1 00:09:39.168 }, 00:09:39.168 { 00:09:39.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.168 "dma_device_type": 2 00:09:39.168 }, 00:09:39.168 { 00:09:39.168 "dma_device_id": "system", 00:09:39.168 "dma_device_type": 1 00:09:39.168 }, 00:09:39.168 { 00:09:39.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.168 "dma_device_type": 2 00:09:39.168 } 00:09:39.168 ], 00:09:39.168 "driver_specific": { 00:09:39.168 "raid": { 00:09:39.168 "uuid": "048139ed-aa8d-4dbf-a2d7-9fea5f78a7c7", 00:09:39.168 "strip_size_kb": 0, 00:09:39.168 "state": "online", 00:09:39.168 "raid_level": "raid1", 00:09:39.168 "superblock": true, 00:09:39.168 "num_base_bdevs": 3, 00:09:39.168 "num_base_bdevs_discovered": 3, 00:09:39.168 "num_base_bdevs_operational": 3, 00:09:39.168 "base_bdevs_list": [ 00:09:39.168 { 00:09:39.168 "name": "BaseBdev1", 00:09:39.168 "uuid": "6b6ece39-5e22-4c6d-aef9-ba88062052b7", 00:09:39.168 "is_configured": true, 00:09:39.168 "data_offset": 2048, 00:09:39.168 "data_size": 63488 00:09:39.168 }, 00:09:39.168 { 00:09:39.168 "name": "BaseBdev2", 00:09:39.168 "uuid": "ab71f6c6-e34a-4e06-b672-efb60948e16e", 00:09:39.168 "is_configured": true, 00:09:39.168 "data_offset": 2048, 00:09:39.168 "data_size": 63488 00:09:39.168 }, 00:09:39.168 { 
00:09:39.168 "name": "BaseBdev3", 00:09:39.168 "uuid": "0f4faf60-d7a4-4625-8086-1e2b61c0dd1a", 00:09:39.168 "is_configured": true, 00:09:39.168 "data_offset": 2048, 00:09:39.168 "data_size": 63488 00:09:39.168 } 00:09:39.168 ] 00:09:39.168 } 00:09:39.168 } 00:09:39.168 }' 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:39.168 BaseBdev2 00:09:39.168 BaseBdev3' 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.168 11:48:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.168 11:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.168 [2024-11-27 11:48:05.498710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:39.427 11:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.427 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:39.427 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:39.427 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:39.427 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:39.427 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:39.427 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:39.427 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.427 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.427 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.427 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.427 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:39.427 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.427 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.427 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.427 
11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.427 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.428 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.428 11:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.428 11:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.428 11:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.428 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.428 "name": "Existed_Raid", 00:09:39.428 "uuid": "048139ed-aa8d-4dbf-a2d7-9fea5f78a7c7", 00:09:39.428 "strip_size_kb": 0, 00:09:39.428 "state": "online", 00:09:39.428 "raid_level": "raid1", 00:09:39.428 "superblock": true, 00:09:39.428 "num_base_bdevs": 3, 00:09:39.428 "num_base_bdevs_discovered": 2, 00:09:39.428 "num_base_bdevs_operational": 2, 00:09:39.428 "base_bdevs_list": [ 00:09:39.428 { 00:09:39.428 "name": null, 00:09:39.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.428 "is_configured": false, 00:09:39.428 "data_offset": 0, 00:09:39.428 "data_size": 63488 00:09:39.428 }, 00:09:39.428 { 00:09:39.428 "name": "BaseBdev2", 00:09:39.428 "uuid": "ab71f6c6-e34a-4e06-b672-efb60948e16e", 00:09:39.428 "is_configured": true, 00:09:39.428 "data_offset": 2048, 00:09:39.428 "data_size": 63488 00:09:39.428 }, 00:09:39.428 { 00:09:39.428 "name": "BaseBdev3", 00:09:39.428 "uuid": "0f4faf60-d7a4-4625-8086-1e2b61c0dd1a", 00:09:39.428 "is_configured": true, 00:09:39.428 "data_offset": 2048, 00:09:39.428 "data_size": 63488 00:09:39.428 } 00:09:39.428 ] 00:09:39.428 }' 00:09:39.428 11:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.428 
11:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.997 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:39.997 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:39.997 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.997 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.997 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.997 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:39.997 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.997 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:39.997 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:39.997 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:39.997 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.997 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.997 [2024-11-27 11:48:06.127146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:39.997 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.997 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:39.997 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:39.997 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 
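The sequence above deletes base bdevs one at a time (`bdev_malloc_delete BaseBdev1`, then `BaseBdev2`) and uses `has_redundancy raid1` to decide the expected array state: a redundant level survives member loss and stays `online` with a reduced operational count, while a non-redundant level would go `offline`. A sketch of that branch (the exact set of levels the script treats as redundant is an assumption here, inferred from the `has_redundancy` case statement returning 0 for raid1):

```python
def expected_state_after_removal(raid_level: str, operational_after: int) -> str:
    # Redundant levels tolerate losing a base bdev as long as at least one
    # member remains; non-redundant levels (e.g. raid0) cannot.
    redundant_levels = {"raid1"}  # assumed; the script's case statement decides this
    if raid_level in redundant_levels and operational_after >= 1:
        return "online"
    return "offline"

# After deleting BaseBdev1 from the 3-member raid1, 2 members remain
# operational, so the test expects the array to stay online:
print(expected_state_after_removal("raid1", 2))  # online
```

This matches the log, where after the first deletion the array reports `"state": "online"` with `num_base_bdevs_operational: 2` and the removed slot shows `"name": null` with the all-zero UUID.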
00:09:39.997 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.997 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.997 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.997 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.997 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:39.997 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:39.997 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:39.997 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.997 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.997 [2024-11-27 11:48:06.295421] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:39.997 [2024-11-27 11:48:06.295658] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.258 [2024-11-27 11:48:06.402924] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.258 [2024-11-27 11:48:06.403092] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.258 [2024-11-27 11:48:06.403142] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 
-- # (( i < num_base_bdevs )) 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.258 BaseBdev2 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.258 11:48:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.258 [ 00:09:40.258 { 00:09:40.258 "name": "BaseBdev2", 00:09:40.258 "aliases": [ 00:09:40.258 "3782bd89-8e59-43c5-9a04-6932d17330ff" 00:09:40.258 ], 00:09:40.258 "product_name": "Malloc disk", 00:09:40.258 "block_size": 512, 00:09:40.258 "num_blocks": 65536, 00:09:40.258 "uuid": "3782bd89-8e59-43c5-9a04-6932d17330ff", 00:09:40.258 "assigned_rate_limits": { 00:09:40.258 "rw_ios_per_sec": 0, 00:09:40.258 "rw_mbytes_per_sec": 0, 00:09:40.258 "r_mbytes_per_sec": 0, 00:09:40.258 "w_mbytes_per_sec": 0 00:09:40.258 }, 00:09:40.258 "claimed": false, 00:09:40.258 "zoned": false, 00:09:40.258 "supported_io_types": { 00:09:40.258 "read": true, 00:09:40.258 "write": true, 00:09:40.258 "unmap": true, 00:09:40.258 "flush": true, 00:09:40.258 "reset": true, 00:09:40.258 "nvme_admin": false, 00:09:40.258 "nvme_io": false, 00:09:40.258 "nvme_io_md": false, 00:09:40.258 
"write_zeroes": true, 00:09:40.258 "zcopy": true, 00:09:40.258 "get_zone_info": false, 00:09:40.258 "zone_management": false, 00:09:40.258 "zone_append": false, 00:09:40.258 "compare": false, 00:09:40.258 "compare_and_write": false, 00:09:40.258 "abort": true, 00:09:40.258 "seek_hole": false, 00:09:40.258 "seek_data": false, 00:09:40.258 "copy": true, 00:09:40.258 "nvme_iov_md": false 00:09:40.258 }, 00:09:40.258 "memory_domains": [ 00:09:40.258 { 00:09:40.258 "dma_device_id": "system", 00:09:40.258 "dma_device_type": 1 00:09:40.258 }, 00:09:40.258 { 00:09:40.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.258 "dma_device_type": 2 00:09:40.258 } 00:09:40.258 ], 00:09:40.258 "driver_specific": {} 00:09:40.258 } 00:09:40.258 ] 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.258 BaseBdev3 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.258 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:40.259 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:40.259 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
local bdev_timeout= 00:09:40.259 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:40.259 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.259 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.259 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:40.259 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.259 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.259 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.259 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:40.259 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.259 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.259 [ 00:09:40.259 { 00:09:40.259 "name": "BaseBdev3", 00:09:40.259 "aliases": [ 00:09:40.259 "344948f6-f493-4a79-9999-26739fe3dcd1" 00:09:40.259 ], 00:09:40.259 "product_name": "Malloc disk", 00:09:40.259 "block_size": 512, 00:09:40.259 "num_blocks": 65536, 00:09:40.259 "uuid": "344948f6-f493-4a79-9999-26739fe3dcd1", 00:09:40.259 "assigned_rate_limits": { 00:09:40.259 "rw_ios_per_sec": 0, 00:09:40.259 "rw_mbytes_per_sec": 0, 00:09:40.259 "r_mbytes_per_sec": 0, 00:09:40.259 "w_mbytes_per_sec": 0 00:09:40.259 }, 00:09:40.259 "claimed": false, 00:09:40.259 "zoned": false, 00:09:40.259 "supported_io_types": { 00:09:40.259 "read": true, 00:09:40.259 "write": true, 00:09:40.259 "unmap": true, 00:09:40.259 "flush": true, 00:09:40.259 "reset": true, 00:09:40.259 "nvme_admin": false, 00:09:40.259 "nvme_io": false, 
00:09:40.259 "nvme_io_md": false, 00:09:40.259 "write_zeroes": true, 00:09:40.259 "zcopy": true, 00:09:40.259 "get_zone_info": false, 00:09:40.259 "zone_management": false, 00:09:40.259 "zone_append": false, 00:09:40.259 "compare": false, 00:09:40.259 "compare_and_write": false, 00:09:40.259 "abort": true, 00:09:40.259 "seek_hole": false, 00:09:40.259 "seek_data": false, 00:09:40.259 "copy": true, 00:09:40.259 "nvme_iov_md": false 00:09:40.259 }, 00:09:40.259 "memory_domains": [ 00:09:40.259 { 00:09:40.259 "dma_device_id": "system", 00:09:40.259 "dma_device_type": 1 00:09:40.259 }, 00:09:40.259 { 00:09:40.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.259 "dma_device_type": 2 00:09:40.259 } 00:09:40.259 ], 00:09:40.259 "driver_specific": {} 00:09:40.259 } 00:09:40.259 ] 00:09:40.259 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.259 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:40.259 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:40.259 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:40.259 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:40.259 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.259 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.519 [2024-11-27 11:48:06.640818] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:40.519 [2024-11-27 11:48:06.640902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:40.519 [2024-11-27 11:48:06.640934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:09:40.519 [2024-11-27 11:48:06.643093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:40.519 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.519 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:40.519 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.519 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.519 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.519 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.519 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.519 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.519 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.519 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.519 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.519 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.519 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.519 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.519 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.519 11:48:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.519 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.519 "name": "Existed_Raid", 00:09:40.519 "uuid": "e4beecf9-71e7-44ec-8b34-af87f355c933", 00:09:40.519 "strip_size_kb": 0, 00:09:40.519 "state": "configuring", 00:09:40.519 "raid_level": "raid1", 00:09:40.519 "superblock": true, 00:09:40.519 "num_base_bdevs": 3, 00:09:40.519 "num_base_bdevs_discovered": 2, 00:09:40.519 "num_base_bdevs_operational": 3, 00:09:40.519 "base_bdevs_list": [ 00:09:40.519 { 00:09:40.519 "name": "BaseBdev1", 00:09:40.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.519 "is_configured": false, 00:09:40.519 "data_offset": 0, 00:09:40.519 "data_size": 0 00:09:40.519 }, 00:09:40.519 { 00:09:40.519 "name": "BaseBdev2", 00:09:40.519 "uuid": "3782bd89-8e59-43c5-9a04-6932d17330ff", 00:09:40.519 "is_configured": true, 00:09:40.519 "data_offset": 2048, 00:09:40.519 "data_size": 63488 00:09:40.519 }, 00:09:40.519 { 00:09:40.519 "name": "BaseBdev3", 00:09:40.519 "uuid": "344948f6-f493-4a79-9999-26739fe3dcd1", 00:09:40.519 "is_configured": true, 00:09:40.519 "data_offset": 2048, 00:09:40.519 "data_size": 63488 00:09:40.519 } 00:09:40.519 ] 00:09:40.519 }' 00:09:40.519 11:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.519 11:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.779 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:40.779 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.779 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.779 [2024-11-27 11:48:07.120065] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:40.779 11:48:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.779 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:40.779 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.779 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.779 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.779 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.779 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.779 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.779 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.779 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.779 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.779 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.779 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.779 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.779 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.779 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.039 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.039 "name": "Existed_Raid", 00:09:41.039 "uuid": 
"e4beecf9-71e7-44ec-8b34-af87f355c933", 00:09:41.039 "strip_size_kb": 0, 00:09:41.039 "state": "configuring", 00:09:41.039 "raid_level": "raid1", 00:09:41.039 "superblock": true, 00:09:41.039 "num_base_bdevs": 3, 00:09:41.039 "num_base_bdevs_discovered": 1, 00:09:41.039 "num_base_bdevs_operational": 3, 00:09:41.039 "base_bdevs_list": [ 00:09:41.039 { 00:09:41.039 "name": "BaseBdev1", 00:09:41.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.039 "is_configured": false, 00:09:41.039 "data_offset": 0, 00:09:41.039 "data_size": 0 00:09:41.039 }, 00:09:41.039 { 00:09:41.039 "name": null, 00:09:41.039 "uuid": "3782bd89-8e59-43c5-9a04-6932d17330ff", 00:09:41.039 "is_configured": false, 00:09:41.039 "data_offset": 0, 00:09:41.039 "data_size": 63488 00:09:41.039 }, 00:09:41.039 { 00:09:41.039 "name": "BaseBdev3", 00:09:41.039 "uuid": "344948f6-f493-4a79-9999-26739fe3dcd1", 00:09:41.039 "is_configured": true, 00:09:41.039 "data_offset": 2048, 00:09:41.039 "data_size": 63488 00:09:41.039 } 00:09:41.039 ] 00:09:41.039 }' 00:09:41.039 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.039 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.299 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:41.299 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.299 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.299 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.299 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.299 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:41.299 11:48:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:41.299 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.299 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.299 [2024-11-27 11:48:07.659348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:41.299 BaseBdev1 00:09:41.299 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.299 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:41.299 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:41.299 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:41.299 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:41.299 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:41.299 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:41.299 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:41.299 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.299 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.299 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.299 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:41.299 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:41.299 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.559 [ 00:09:41.559 { 00:09:41.559 "name": "BaseBdev1", 00:09:41.559 "aliases": [ 00:09:41.559 "8438337c-8fbe-4679-92c5-7afd585f2c4f" 00:09:41.559 ], 00:09:41.559 "product_name": "Malloc disk", 00:09:41.559 "block_size": 512, 00:09:41.559 "num_blocks": 65536, 00:09:41.559 "uuid": "8438337c-8fbe-4679-92c5-7afd585f2c4f", 00:09:41.559 "assigned_rate_limits": { 00:09:41.559 "rw_ios_per_sec": 0, 00:09:41.559 "rw_mbytes_per_sec": 0, 00:09:41.559 "r_mbytes_per_sec": 0, 00:09:41.559 "w_mbytes_per_sec": 0 00:09:41.559 }, 00:09:41.559 "claimed": true, 00:09:41.559 "claim_type": "exclusive_write", 00:09:41.559 "zoned": false, 00:09:41.559 "supported_io_types": { 00:09:41.559 "read": true, 00:09:41.559 "write": true, 00:09:41.559 "unmap": true, 00:09:41.559 "flush": true, 00:09:41.559 "reset": true, 00:09:41.559 "nvme_admin": false, 00:09:41.559 "nvme_io": false, 00:09:41.559 "nvme_io_md": false, 00:09:41.559 "write_zeroes": true, 00:09:41.559 "zcopy": true, 00:09:41.559 "get_zone_info": false, 00:09:41.559 "zone_management": false, 00:09:41.559 "zone_append": false, 00:09:41.559 "compare": false, 00:09:41.559 "compare_and_write": false, 00:09:41.559 "abort": true, 00:09:41.559 "seek_hole": false, 00:09:41.559 "seek_data": false, 00:09:41.559 "copy": true, 00:09:41.559 "nvme_iov_md": false 00:09:41.559 }, 00:09:41.559 "memory_domains": [ 00:09:41.559 { 00:09:41.559 "dma_device_id": "system", 00:09:41.559 "dma_device_type": 1 00:09:41.559 }, 00:09:41.559 { 00:09:41.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.559 "dma_device_type": 2 00:09:41.559 } 00:09:41.559 ], 00:09:41.559 "driver_specific": {} 00:09:41.559 } 00:09:41.559 ] 00:09:41.559 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.559 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:41.559 
11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:41.559 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.559 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.559 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.559 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.559 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.559 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.559 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.559 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.559 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.560 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.560 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.560 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.560 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.560 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.560 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.560 "name": "Existed_Raid", 00:09:41.560 "uuid": "e4beecf9-71e7-44ec-8b34-af87f355c933", 00:09:41.560 "strip_size_kb": 0, 
00:09:41.560 "state": "configuring", 00:09:41.560 "raid_level": "raid1", 00:09:41.560 "superblock": true, 00:09:41.560 "num_base_bdevs": 3, 00:09:41.560 "num_base_bdevs_discovered": 2, 00:09:41.560 "num_base_bdevs_operational": 3, 00:09:41.560 "base_bdevs_list": [ 00:09:41.560 { 00:09:41.560 "name": "BaseBdev1", 00:09:41.560 "uuid": "8438337c-8fbe-4679-92c5-7afd585f2c4f", 00:09:41.560 "is_configured": true, 00:09:41.560 "data_offset": 2048, 00:09:41.560 "data_size": 63488 00:09:41.560 }, 00:09:41.560 { 00:09:41.560 "name": null, 00:09:41.560 "uuid": "3782bd89-8e59-43c5-9a04-6932d17330ff", 00:09:41.560 "is_configured": false, 00:09:41.560 "data_offset": 0, 00:09:41.560 "data_size": 63488 00:09:41.560 }, 00:09:41.560 { 00:09:41.560 "name": "BaseBdev3", 00:09:41.560 "uuid": "344948f6-f493-4a79-9999-26739fe3dcd1", 00:09:41.560 "is_configured": true, 00:09:41.560 "data_offset": 2048, 00:09:41.560 "data_size": 63488 00:09:41.560 } 00:09:41.560 ] 00:09:41.560 }' 00:09:41.560 11:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.560 11:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.819 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:41.819 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.819 11:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.819 11:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.819 11:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.079 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:42.079 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:42.079 11:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.079 11:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.079 [2024-11-27 11:48:08.218525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:42.079 11:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.079 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:42.079 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.079 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.079 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.079 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.079 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.079 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.079 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.079 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.079 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.079 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.079 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.079 11:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:42.079 11:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.079 11:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.079 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.079 "name": "Existed_Raid", 00:09:42.079 "uuid": "e4beecf9-71e7-44ec-8b34-af87f355c933", 00:09:42.079 "strip_size_kb": 0, 00:09:42.079 "state": "configuring", 00:09:42.079 "raid_level": "raid1", 00:09:42.079 "superblock": true, 00:09:42.079 "num_base_bdevs": 3, 00:09:42.079 "num_base_bdevs_discovered": 1, 00:09:42.079 "num_base_bdevs_operational": 3, 00:09:42.079 "base_bdevs_list": [ 00:09:42.079 { 00:09:42.079 "name": "BaseBdev1", 00:09:42.079 "uuid": "8438337c-8fbe-4679-92c5-7afd585f2c4f", 00:09:42.079 "is_configured": true, 00:09:42.079 "data_offset": 2048, 00:09:42.079 "data_size": 63488 00:09:42.079 }, 00:09:42.079 { 00:09:42.079 "name": null, 00:09:42.079 "uuid": "3782bd89-8e59-43c5-9a04-6932d17330ff", 00:09:42.079 "is_configured": false, 00:09:42.079 "data_offset": 0, 00:09:42.079 "data_size": 63488 00:09:42.079 }, 00:09:42.079 { 00:09:42.079 "name": null, 00:09:42.079 "uuid": "344948f6-f493-4a79-9999-26739fe3dcd1", 00:09:42.079 "is_configured": false, 00:09:42.079 "data_offset": 0, 00:09:42.079 "data_size": 63488 00:09:42.079 } 00:09:42.079 ] 00:09:42.079 }' 00:09:42.079 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.079 11:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.339 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:42.339 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.339 11:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:42.339 11:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.620 11:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.620 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:42.620 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:42.620 11:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.620 11:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.620 [2024-11-27 11:48:08.749699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:42.620 11:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.620 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:42.620 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.620 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.620 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.620 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.620 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.620 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.620 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.620 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:42.620 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.620 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.620 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.620 11:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.620 11:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.620 11:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.620 11:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.620 "name": "Existed_Raid", 00:09:42.620 "uuid": "e4beecf9-71e7-44ec-8b34-af87f355c933", 00:09:42.620 "strip_size_kb": 0, 00:09:42.620 "state": "configuring", 00:09:42.620 "raid_level": "raid1", 00:09:42.620 "superblock": true, 00:09:42.620 "num_base_bdevs": 3, 00:09:42.620 "num_base_bdevs_discovered": 2, 00:09:42.620 "num_base_bdevs_operational": 3, 00:09:42.620 "base_bdevs_list": [ 00:09:42.620 { 00:09:42.620 "name": "BaseBdev1", 00:09:42.620 "uuid": "8438337c-8fbe-4679-92c5-7afd585f2c4f", 00:09:42.620 "is_configured": true, 00:09:42.620 "data_offset": 2048, 00:09:42.620 "data_size": 63488 00:09:42.620 }, 00:09:42.620 { 00:09:42.620 "name": null, 00:09:42.620 "uuid": "3782bd89-8e59-43c5-9a04-6932d17330ff", 00:09:42.620 "is_configured": false, 00:09:42.620 "data_offset": 0, 00:09:42.620 "data_size": 63488 00:09:42.620 }, 00:09:42.620 { 00:09:42.620 "name": "BaseBdev3", 00:09:42.620 "uuid": "344948f6-f493-4a79-9999-26739fe3dcd1", 00:09:42.620 "is_configured": true, 00:09:42.620 "data_offset": 2048, 00:09:42.620 "data_size": 63488 00:09:42.620 } 00:09:42.620 ] 00:09:42.620 }' 00:09:42.620 11:48:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.620 11:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.879 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.879 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:42.879 11:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.879 11:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.879 11:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.879 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:43.138 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:43.138 11:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.138 11:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.138 [2024-11-27 11:48:09.268835] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:43.138 11:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.138 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:43.138 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.138 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.138 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.138 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:43.138 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.138 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.138 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.138 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.138 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.138 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.138 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.138 11:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.138 11:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.138 11:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.138 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.138 "name": "Existed_Raid", 00:09:43.138 "uuid": "e4beecf9-71e7-44ec-8b34-af87f355c933", 00:09:43.138 "strip_size_kb": 0, 00:09:43.138 "state": "configuring", 00:09:43.138 "raid_level": "raid1", 00:09:43.138 "superblock": true, 00:09:43.138 "num_base_bdevs": 3, 00:09:43.138 "num_base_bdevs_discovered": 1, 00:09:43.138 "num_base_bdevs_operational": 3, 00:09:43.138 "base_bdevs_list": [ 00:09:43.138 { 00:09:43.138 "name": null, 00:09:43.138 "uuid": "8438337c-8fbe-4679-92c5-7afd585f2c4f", 00:09:43.138 "is_configured": false, 00:09:43.138 "data_offset": 0, 00:09:43.138 "data_size": 63488 00:09:43.138 }, 00:09:43.138 { 00:09:43.138 "name": null, 00:09:43.138 "uuid": 
"3782bd89-8e59-43c5-9a04-6932d17330ff", 00:09:43.138 "is_configured": false, 00:09:43.138 "data_offset": 0, 00:09:43.138 "data_size": 63488 00:09:43.138 }, 00:09:43.138 { 00:09:43.138 "name": "BaseBdev3", 00:09:43.138 "uuid": "344948f6-f493-4a79-9999-26739fe3dcd1", 00:09:43.138 "is_configured": true, 00:09:43.138 "data_offset": 2048, 00:09:43.138 "data_size": 63488 00:09:43.138 } 00:09:43.138 ] 00:09:43.138 }' 00:09:43.138 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.138 11:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.707 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:43.708 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.708 11:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.708 11:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.708 11:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.708 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:43.708 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:43.708 11:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.708 11:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.708 [2024-11-27 11:48:09.894216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:43.708 11:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.708 11:48:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:43.708 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.708 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.708 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.708 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.708 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.708 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.708 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.708 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.708 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.708 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.708 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.708 11:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.708 11:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.708 11:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.708 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.708 "name": "Existed_Raid", 00:09:43.708 "uuid": "e4beecf9-71e7-44ec-8b34-af87f355c933", 00:09:43.708 "strip_size_kb": 0, 00:09:43.708 "state": "configuring", 00:09:43.708 
"raid_level": "raid1", 00:09:43.708 "superblock": true, 00:09:43.708 "num_base_bdevs": 3, 00:09:43.708 "num_base_bdevs_discovered": 2, 00:09:43.708 "num_base_bdevs_operational": 3, 00:09:43.708 "base_bdevs_list": [ 00:09:43.708 { 00:09:43.708 "name": null, 00:09:43.708 "uuid": "8438337c-8fbe-4679-92c5-7afd585f2c4f", 00:09:43.708 "is_configured": false, 00:09:43.708 "data_offset": 0, 00:09:43.708 "data_size": 63488 00:09:43.708 }, 00:09:43.708 { 00:09:43.708 "name": "BaseBdev2", 00:09:43.708 "uuid": "3782bd89-8e59-43c5-9a04-6932d17330ff", 00:09:43.708 "is_configured": true, 00:09:43.708 "data_offset": 2048, 00:09:43.708 "data_size": 63488 00:09:43.708 }, 00:09:43.708 { 00:09:43.708 "name": "BaseBdev3", 00:09:43.708 "uuid": "344948f6-f493-4a79-9999-26739fe3dcd1", 00:09:43.708 "is_configured": true, 00:09:43.708 "data_offset": 2048, 00:09:43.708 "data_size": 63488 00:09:43.708 } 00:09:43.708 ] 00:09:43.708 }' 00:09:43.708 11:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.708 11:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.278 11:48:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8438337c-8fbe-4679-92c5-7afd585f2c4f 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.278 [2024-11-27 11:48:10.523134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:44.278 [2024-11-27 11:48:10.523426] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:44.278 [2024-11-27 11:48:10.523456] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:44.278 [2024-11-27 11:48:10.523772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:44.278 [2024-11-27 11:48:10.523968] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:44.278 [2024-11-27 11:48:10.523983] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:44.278 NewBaseBdev 00:09:44.278 [2024-11-27 11:48:10.524141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:44.278 
11:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.278 [ 00:09:44.278 { 00:09:44.278 "name": "NewBaseBdev", 00:09:44.278 "aliases": [ 00:09:44.278 "8438337c-8fbe-4679-92c5-7afd585f2c4f" 00:09:44.278 ], 00:09:44.278 "product_name": "Malloc disk", 00:09:44.278 "block_size": 512, 00:09:44.278 "num_blocks": 65536, 00:09:44.278 "uuid": "8438337c-8fbe-4679-92c5-7afd585f2c4f", 00:09:44.278 "assigned_rate_limits": { 00:09:44.278 "rw_ios_per_sec": 0, 00:09:44.278 "rw_mbytes_per_sec": 0, 00:09:44.278 "r_mbytes_per_sec": 0, 00:09:44.278 "w_mbytes_per_sec": 0 00:09:44.278 }, 00:09:44.278 "claimed": true, 00:09:44.278 "claim_type": "exclusive_write", 00:09:44.278 
"zoned": false, 00:09:44.278 "supported_io_types": { 00:09:44.278 "read": true, 00:09:44.278 "write": true, 00:09:44.278 "unmap": true, 00:09:44.278 "flush": true, 00:09:44.278 "reset": true, 00:09:44.278 "nvme_admin": false, 00:09:44.278 "nvme_io": false, 00:09:44.278 "nvme_io_md": false, 00:09:44.278 "write_zeroes": true, 00:09:44.278 "zcopy": true, 00:09:44.278 "get_zone_info": false, 00:09:44.278 "zone_management": false, 00:09:44.278 "zone_append": false, 00:09:44.278 "compare": false, 00:09:44.278 "compare_and_write": false, 00:09:44.278 "abort": true, 00:09:44.278 "seek_hole": false, 00:09:44.278 "seek_data": false, 00:09:44.278 "copy": true, 00:09:44.278 "nvme_iov_md": false 00:09:44.278 }, 00:09:44.278 "memory_domains": [ 00:09:44.278 { 00:09:44.278 "dma_device_id": "system", 00:09:44.278 "dma_device_type": 1 00:09:44.278 }, 00:09:44.278 { 00:09:44.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.278 "dma_device_type": 2 00:09:44.278 } 00:09:44.278 ], 00:09:44.278 "driver_specific": {} 00:09:44.278 } 00:09:44.278 ] 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.278 11:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.279 11:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:44.279 11:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.279 11:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.279 11:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.279 11:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.279 11:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.279 11:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.279 11:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.279 11:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.279 11:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.279 11:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.279 "name": "Existed_Raid", 00:09:44.279 "uuid": "e4beecf9-71e7-44ec-8b34-af87f355c933", 00:09:44.279 "strip_size_kb": 0, 00:09:44.279 "state": "online", 00:09:44.279 "raid_level": "raid1", 00:09:44.279 "superblock": true, 00:09:44.279 "num_base_bdevs": 3, 00:09:44.279 "num_base_bdevs_discovered": 3, 00:09:44.279 "num_base_bdevs_operational": 3, 00:09:44.279 "base_bdevs_list": [ 00:09:44.279 { 00:09:44.279 "name": "NewBaseBdev", 00:09:44.279 "uuid": "8438337c-8fbe-4679-92c5-7afd585f2c4f", 00:09:44.279 "is_configured": true, 00:09:44.279 "data_offset": 2048, 00:09:44.279 "data_size": 63488 00:09:44.279 }, 00:09:44.279 { 00:09:44.279 "name": "BaseBdev2", 00:09:44.279 "uuid": "3782bd89-8e59-43c5-9a04-6932d17330ff", 00:09:44.279 "is_configured": true, 00:09:44.279 "data_offset": 2048, 00:09:44.279 "data_size": 63488 00:09:44.279 }, 00:09:44.279 
{ 00:09:44.279 "name": "BaseBdev3", 00:09:44.279 "uuid": "344948f6-f493-4a79-9999-26739fe3dcd1", 00:09:44.279 "is_configured": true, 00:09:44.279 "data_offset": 2048, 00:09:44.279 "data_size": 63488 00:09:44.279 } 00:09:44.279 ] 00:09:44.279 }' 00:09:44.279 11:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.279 11:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.847 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:44.847 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:44.847 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:44.847 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:44.847 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:44.847 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:44.847 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:44.847 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:44.847 11:48:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.847 11:48:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.847 [2024-11-27 11:48:11.082539] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.847 11:48:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.847 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:44.847 "name": "Existed_Raid", 00:09:44.847 
"aliases": [ 00:09:44.847 "e4beecf9-71e7-44ec-8b34-af87f355c933" 00:09:44.847 ], 00:09:44.847 "product_name": "Raid Volume", 00:09:44.847 "block_size": 512, 00:09:44.847 "num_blocks": 63488, 00:09:44.847 "uuid": "e4beecf9-71e7-44ec-8b34-af87f355c933", 00:09:44.847 "assigned_rate_limits": { 00:09:44.847 "rw_ios_per_sec": 0, 00:09:44.847 "rw_mbytes_per_sec": 0, 00:09:44.847 "r_mbytes_per_sec": 0, 00:09:44.847 "w_mbytes_per_sec": 0 00:09:44.847 }, 00:09:44.847 "claimed": false, 00:09:44.847 "zoned": false, 00:09:44.847 "supported_io_types": { 00:09:44.847 "read": true, 00:09:44.847 "write": true, 00:09:44.847 "unmap": false, 00:09:44.847 "flush": false, 00:09:44.847 "reset": true, 00:09:44.847 "nvme_admin": false, 00:09:44.847 "nvme_io": false, 00:09:44.847 "nvme_io_md": false, 00:09:44.847 "write_zeroes": true, 00:09:44.847 "zcopy": false, 00:09:44.847 "get_zone_info": false, 00:09:44.847 "zone_management": false, 00:09:44.847 "zone_append": false, 00:09:44.847 "compare": false, 00:09:44.847 "compare_and_write": false, 00:09:44.847 "abort": false, 00:09:44.847 "seek_hole": false, 00:09:44.847 "seek_data": false, 00:09:44.847 "copy": false, 00:09:44.847 "nvme_iov_md": false 00:09:44.847 }, 00:09:44.847 "memory_domains": [ 00:09:44.847 { 00:09:44.847 "dma_device_id": "system", 00:09:44.847 "dma_device_type": 1 00:09:44.847 }, 00:09:44.847 { 00:09:44.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.847 "dma_device_type": 2 00:09:44.847 }, 00:09:44.847 { 00:09:44.847 "dma_device_id": "system", 00:09:44.847 "dma_device_type": 1 00:09:44.847 }, 00:09:44.847 { 00:09:44.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.847 "dma_device_type": 2 00:09:44.847 }, 00:09:44.847 { 00:09:44.847 "dma_device_id": "system", 00:09:44.847 "dma_device_type": 1 00:09:44.847 }, 00:09:44.847 { 00:09:44.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.847 "dma_device_type": 2 00:09:44.847 } 00:09:44.847 ], 00:09:44.847 "driver_specific": { 00:09:44.847 "raid": { 00:09:44.847 
"uuid": "e4beecf9-71e7-44ec-8b34-af87f355c933", 00:09:44.847 "strip_size_kb": 0, 00:09:44.847 "state": "online", 00:09:44.847 "raid_level": "raid1", 00:09:44.847 "superblock": true, 00:09:44.847 "num_base_bdevs": 3, 00:09:44.847 "num_base_bdevs_discovered": 3, 00:09:44.847 "num_base_bdevs_operational": 3, 00:09:44.847 "base_bdevs_list": [ 00:09:44.847 { 00:09:44.847 "name": "NewBaseBdev", 00:09:44.847 "uuid": "8438337c-8fbe-4679-92c5-7afd585f2c4f", 00:09:44.847 "is_configured": true, 00:09:44.847 "data_offset": 2048, 00:09:44.847 "data_size": 63488 00:09:44.847 }, 00:09:44.847 { 00:09:44.847 "name": "BaseBdev2", 00:09:44.847 "uuid": "3782bd89-8e59-43c5-9a04-6932d17330ff", 00:09:44.847 "is_configured": true, 00:09:44.847 "data_offset": 2048, 00:09:44.847 "data_size": 63488 00:09:44.847 }, 00:09:44.847 { 00:09:44.847 "name": "BaseBdev3", 00:09:44.847 "uuid": "344948f6-f493-4a79-9999-26739fe3dcd1", 00:09:44.847 "is_configured": true, 00:09:44.847 "data_offset": 2048, 00:09:44.847 "data_size": 63488 00:09:44.847 } 00:09:44.847 ] 00:09:44.847 } 00:09:44.847 } 00:09:44.847 }' 00:09:44.847 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:44.847 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:44.847 BaseBdev2 00:09:44.847 BaseBdev3' 00:09:44.847 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.847 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:44.847 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.847 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:44.847 11:48:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.847 11:48:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.847 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.105 11:48:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.105 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.105 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.105 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.105 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:45.105 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.105 11:48:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.105 11:48:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.106 11:48:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.106 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.106 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.106 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.106 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:45.106 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.106 11:48:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.106 11:48:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.106 11:48:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.106 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.106 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.106 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:45.106 11:48:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.106 11:48:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.106 [2024-11-27 11:48:11.365764] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:45.106 [2024-11-27 11:48:11.365800] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.106 [2024-11-27 11:48:11.365908] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.106 [2024-11-27 11:48:11.366209] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.106 [2024-11-27 11:48:11.366226] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:45.106 11:48:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.106 11:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68006 00:09:45.106 11:48:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 68006 ']' 
00:09:45.106 11:48:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68006 00:09:45.106 11:48:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:45.106 11:48:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.106 11:48:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68006 00:09:45.106 11:48:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:45.106 11:48:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:45.106 11:48:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68006' 00:09:45.106 killing process with pid 68006 00:09:45.106 11:48:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68006 00:09:45.106 [2024-11-27 11:48:11.413759] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:45.106 11:48:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68006 00:09:45.365 [2024-11-27 11:48:11.726454] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:46.742 11:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:46.742 ************************************ 00:09:46.742 END TEST raid_state_function_test_sb 00:09:46.742 ************************************ 00:09:46.742 00:09:46.742 real 0m11.406s 00:09:46.742 user 0m18.077s 00:09:46.742 sys 0m2.008s 00:09:46.742 11:48:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.742 11:48:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.742 11:48:13 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 
00:09:46.742 11:48:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:46.742 11:48:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.742 11:48:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:46.742 ************************************ 00:09:46.742 START TEST raid_superblock_test 00:09:46.742 ************************************ 00:09:46.742 11:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:09:46.742 11:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:46.742 11:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:46.742 11:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:46.742 11:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:46.742 11:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:46.742 11:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:46.742 11:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:46.742 11:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:46.742 11:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:46.742 11:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:46.742 11:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:46.742 11:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:46.742 11:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:46.742 11:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 
00:09:46.742 11:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:46.742 11:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68637 00:09:46.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.742 11:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68637 00:09:46.742 11:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68637 ']' 00:09:46.742 11:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.742 11:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.742 11:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:46.742 11:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.742 11:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.742 11:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.001 [2024-11-27 11:48:13.158599] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:09:47.001 [2024-11-27 11:48:13.158851] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68637 ] 00:09:47.002 [2024-11-27 11:48:13.338789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.260 [2024-11-27 11:48:13.466108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.519 [2024-11-27 11:48:13.691244] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.519 [2024-11-27 11:48:13.691396] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.779 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.779 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:47.779 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:47.779 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.779 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:47.779 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:47.779 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:47.779 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:47.779 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:47.779 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:47.779 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:47.779 
11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.779 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.779 malloc1 00:09:47.779 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.779 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:47.779 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.779 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.779 [2024-11-27 11:48:14.127149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:47.779 [2024-11-27 11:48:14.127318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.779 [2024-11-27 11:48:14.127370] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:47.779 [2024-11-27 11:48:14.127409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.779 [2024-11-27 11:48:14.130048] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.779 [2024-11-27 11:48:14.130156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:47.779 pt1 00:09:47.779 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.779 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:47.779 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.779 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:47.779 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:47.779 11:48:14 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:47.779 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:47.779 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:47.779 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:47.779 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:47.779 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.779 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.051 malloc2 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.051 [2024-11-27 11:48:14.193623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:48.051 [2024-11-27 11:48:14.193706] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.051 [2024-11-27 11:48:14.193739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:48.051 [2024-11-27 11:48:14.193749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.051 [2024-11-27 11:48:14.196349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.051 [2024-11-27 11:48:14.196398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:48.051 
pt2 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.051 malloc3 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.051 [2024-11-27 11:48:14.270688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:48.051 [2024-11-27 11:48:14.270848] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.051 [2024-11-27 11:48:14.270902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:48.051 [2024-11-27 11:48:14.270971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.051 [2024-11-27 11:48:14.273592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.051 [2024-11-27 11:48:14.273722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:48.051 pt3 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.051 [2024-11-27 11:48:14.286793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:48.051 [2024-11-27 11:48:14.289163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:48.051 [2024-11-27 11:48:14.289343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:48.051 [2024-11-27 11:48:14.289609] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:48.051 [2024-11-27 11:48:14.289683] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:48.051 [2024-11-27 11:48:14.290086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:48.051 
[2024-11-27 11:48:14.290374] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:48.051 [2024-11-27 11:48:14.290429] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:48.051 [2024-11-27 11:48:14.290741] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.051 "name": "raid_bdev1", 00:09:48.051 "uuid": "d94e519a-af80-45b6-b89d-55820f634ecf", 00:09:48.051 "strip_size_kb": 0, 00:09:48.051 "state": "online", 00:09:48.051 "raid_level": "raid1", 00:09:48.051 "superblock": true, 00:09:48.051 "num_base_bdevs": 3, 00:09:48.051 "num_base_bdevs_discovered": 3, 00:09:48.051 "num_base_bdevs_operational": 3, 00:09:48.051 "base_bdevs_list": [ 00:09:48.051 { 00:09:48.051 "name": "pt1", 00:09:48.051 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.051 "is_configured": true, 00:09:48.051 "data_offset": 2048, 00:09:48.051 "data_size": 63488 00:09:48.051 }, 00:09:48.051 { 00:09:48.051 "name": "pt2", 00:09:48.051 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.051 "is_configured": true, 00:09:48.051 "data_offset": 2048, 00:09:48.051 "data_size": 63488 00:09:48.051 }, 00:09:48.051 { 00:09:48.051 "name": "pt3", 00:09:48.051 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:48.051 "is_configured": true, 00:09:48.051 "data_offset": 2048, 00:09:48.051 "data_size": 63488 00:09:48.051 } 00:09:48.051 ] 00:09:48.051 }' 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.051 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.635 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:48.635 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:48.635 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:48.635 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:48.635 11:48:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:48.635 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:48.635 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:48.635 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:48.635 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.635 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.635 [2024-11-27 11:48:14.766289] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.635 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.635 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:48.635 "name": "raid_bdev1", 00:09:48.635 "aliases": [ 00:09:48.635 "d94e519a-af80-45b6-b89d-55820f634ecf" 00:09:48.635 ], 00:09:48.635 "product_name": "Raid Volume", 00:09:48.635 "block_size": 512, 00:09:48.635 "num_blocks": 63488, 00:09:48.635 "uuid": "d94e519a-af80-45b6-b89d-55820f634ecf", 00:09:48.635 "assigned_rate_limits": { 00:09:48.635 "rw_ios_per_sec": 0, 00:09:48.635 "rw_mbytes_per_sec": 0, 00:09:48.635 "r_mbytes_per_sec": 0, 00:09:48.635 "w_mbytes_per_sec": 0 00:09:48.635 }, 00:09:48.635 "claimed": false, 00:09:48.635 "zoned": false, 00:09:48.635 "supported_io_types": { 00:09:48.635 "read": true, 00:09:48.635 "write": true, 00:09:48.635 "unmap": false, 00:09:48.635 "flush": false, 00:09:48.635 "reset": true, 00:09:48.635 "nvme_admin": false, 00:09:48.635 "nvme_io": false, 00:09:48.635 "nvme_io_md": false, 00:09:48.635 "write_zeroes": true, 00:09:48.635 "zcopy": false, 00:09:48.635 "get_zone_info": false, 00:09:48.635 "zone_management": false, 00:09:48.635 "zone_append": false, 00:09:48.635 "compare": false, 00:09:48.635 
"compare_and_write": false, 00:09:48.635 "abort": false, 00:09:48.635 "seek_hole": false, 00:09:48.635 "seek_data": false, 00:09:48.635 "copy": false, 00:09:48.635 "nvme_iov_md": false 00:09:48.635 }, 00:09:48.635 "memory_domains": [ 00:09:48.635 { 00:09:48.635 "dma_device_id": "system", 00:09:48.635 "dma_device_type": 1 00:09:48.635 }, 00:09:48.635 { 00:09:48.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.635 "dma_device_type": 2 00:09:48.635 }, 00:09:48.635 { 00:09:48.635 "dma_device_id": "system", 00:09:48.635 "dma_device_type": 1 00:09:48.635 }, 00:09:48.635 { 00:09:48.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.635 "dma_device_type": 2 00:09:48.635 }, 00:09:48.635 { 00:09:48.635 "dma_device_id": "system", 00:09:48.635 "dma_device_type": 1 00:09:48.635 }, 00:09:48.635 { 00:09:48.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.635 "dma_device_type": 2 00:09:48.635 } 00:09:48.635 ], 00:09:48.635 "driver_specific": { 00:09:48.635 "raid": { 00:09:48.635 "uuid": "d94e519a-af80-45b6-b89d-55820f634ecf", 00:09:48.635 "strip_size_kb": 0, 00:09:48.635 "state": "online", 00:09:48.635 "raid_level": "raid1", 00:09:48.635 "superblock": true, 00:09:48.635 "num_base_bdevs": 3, 00:09:48.635 "num_base_bdevs_discovered": 3, 00:09:48.635 "num_base_bdevs_operational": 3, 00:09:48.635 "base_bdevs_list": [ 00:09:48.635 { 00:09:48.635 "name": "pt1", 00:09:48.635 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.635 "is_configured": true, 00:09:48.635 "data_offset": 2048, 00:09:48.635 "data_size": 63488 00:09:48.635 }, 00:09:48.635 { 00:09:48.635 "name": "pt2", 00:09:48.635 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.635 "is_configured": true, 00:09:48.635 "data_offset": 2048, 00:09:48.635 "data_size": 63488 00:09:48.635 }, 00:09:48.635 { 00:09:48.635 "name": "pt3", 00:09:48.635 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:48.635 "is_configured": true, 00:09:48.635 "data_offset": 2048, 00:09:48.635 "data_size": 63488 00:09:48.635 } 
00:09:48.635 ] 00:09:48.635 } 00:09:48.635 } 00:09:48.635 }' 00:09:48.635 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:48.636 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:48.636 pt2 00:09:48.636 pt3' 00:09:48.636 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.636 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:48.636 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.636 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.636 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:48.636 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.636 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.636 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.636 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.636 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.636 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.636 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:48.636 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.636 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.636 11:48:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.636 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.636 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.636 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.636 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.636 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:48.636 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.636 11:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.636 11:48:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.636 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.897 [2024-11-27 11:48:15.049783] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d94e519a-af80-45b6-b89d-55820f634ecf 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d94e519a-af80-45b6-b89d-55820f634ecf ']' 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.897 [2024-11-27 11:48:15.097380] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.897 [2024-11-27 11:48:15.097412] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:48.897 [2024-11-27 11:48:15.097505] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.897 [2024-11-27 11:48:15.097584] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:48.897 [2024-11-27 11:48:15.097595] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:48.897 
11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.897 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.897 [2024-11-27 11:48:15.245178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:48.897 [2024-11-27 11:48:15.247109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:48.897 [2024-11-27 11:48:15.247174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:09:48.897 [2024-11-27 11:48:15.247232] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:48.897 [2024-11-27 11:48:15.247292] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:48.897 [2024-11-27 11:48:15.247313] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:48.897 [2024-11-27 11:48:15.247331] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.897 [2024-11-27 11:48:15.247342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:48.897 request: 00:09:48.897 { 00:09:48.897 "name": "raid_bdev1", 00:09:48.897 "raid_level": "raid1", 00:09:48.897 "base_bdevs": [ 00:09:48.897 "malloc1", 00:09:48.897 "malloc2", 00:09:48.897 "malloc3" 00:09:48.897 ], 00:09:48.897 "superblock": false, 00:09:48.897 "method": "bdev_raid_create", 00:09:48.897 "req_id": 1 00:09:48.897 } 00:09:48.897 Got JSON-RPC error response 00:09:48.897 response: 00:09:48.897 { 00:09:48.897 "code": -17, 00:09:48.897 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:48.897 } 00:09:48.898 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:48.898 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:48.898 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:48.898 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:48.898 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:48.898 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.898 11:48:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:48.898 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.898 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.898 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.158 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:49.158 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:49.158 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:49.158 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.158 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.158 [2024-11-27 11:48:15.309020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:49.158 [2024-11-27 11:48:15.309122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.158 [2024-11-27 11:48:15.309163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:49.158 [2024-11-27 11:48:15.309192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.158 [2024-11-27 11:48:15.311378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.158 [2024-11-27 11:48:15.311459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:49.158 [2024-11-27 11:48:15.311586] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:49.158 [2024-11-27 11:48:15.311670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:49.158 pt1 00:09:49.158 11:48:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.158 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:49.158 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.158 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.158 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.158 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.158 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.158 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.158 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.158 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.158 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.158 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.158 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.158 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.158 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.158 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.158 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.158 "name": "raid_bdev1", 00:09:49.158 "uuid": "d94e519a-af80-45b6-b89d-55820f634ecf", 00:09:49.158 "strip_size_kb": 0, 00:09:49.158 "state": "configuring", 00:09:49.158 
"raid_level": "raid1", 00:09:49.158 "superblock": true, 00:09:49.158 "num_base_bdevs": 3, 00:09:49.158 "num_base_bdevs_discovered": 1, 00:09:49.158 "num_base_bdevs_operational": 3, 00:09:49.158 "base_bdevs_list": [ 00:09:49.158 { 00:09:49.158 "name": "pt1", 00:09:49.158 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:49.158 "is_configured": true, 00:09:49.158 "data_offset": 2048, 00:09:49.158 "data_size": 63488 00:09:49.158 }, 00:09:49.158 { 00:09:49.158 "name": null, 00:09:49.158 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.158 "is_configured": false, 00:09:49.158 "data_offset": 2048, 00:09:49.158 "data_size": 63488 00:09:49.158 }, 00:09:49.158 { 00:09:49.158 "name": null, 00:09:49.158 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:49.158 "is_configured": false, 00:09:49.158 "data_offset": 2048, 00:09:49.158 "data_size": 63488 00:09:49.158 } 00:09:49.158 ] 00:09:49.158 }' 00:09:49.158 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.158 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.418 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:49.418 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:49.418 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.418 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.418 [2024-11-27 11:48:15.792197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:49.418 [2024-11-27 11:48:15.792269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.418 [2024-11-27 11:48:15.792294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:49.418 [2024-11-27 11:48:15.792302] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.418 [2024-11-27 11:48:15.792772] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.418 [2024-11-27 11:48:15.792789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:49.418 [2024-11-27 11:48:15.792892] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:49.418 [2024-11-27 11:48:15.792917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:49.418 pt2 00:09:49.418 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.418 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:49.418 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.418 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.678 [2024-11-27 11:48:15.804172] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:49.678 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.678 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:49.678 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.678 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.678 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.678 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.678 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.678 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:49.678 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.678 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.678 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.678 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.678 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.678 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.678 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.678 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.678 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.678 "name": "raid_bdev1", 00:09:49.678 "uuid": "d94e519a-af80-45b6-b89d-55820f634ecf", 00:09:49.678 "strip_size_kb": 0, 00:09:49.678 "state": "configuring", 00:09:49.678 "raid_level": "raid1", 00:09:49.678 "superblock": true, 00:09:49.678 "num_base_bdevs": 3, 00:09:49.678 "num_base_bdevs_discovered": 1, 00:09:49.678 "num_base_bdevs_operational": 3, 00:09:49.678 "base_bdevs_list": [ 00:09:49.678 { 00:09:49.678 "name": "pt1", 00:09:49.678 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:49.678 "is_configured": true, 00:09:49.678 "data_offset": 2048, 00:09:49.678 "data_size": 63488 00:09:49.678 }, 00:09:49.678 { 00:09:49.678 "name": null, 00:09:49.678 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.678 "is_configured": false, 00:09:49.678 "data_offset": 0, 00:09:49.678 "data_size": 63488 00:09:49.678 }, 00:09:49.678 { 00:09:49.678 "name": null, 00:09:49.678 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:49.678 "is_configured": false, 00:09:49.678 "data_offset": 2048, 00:09:49.678 
"data_size": 63488 00:09:49.678 } 00:09:49.678 ] 00:09:49.678 }' 00:09:49.678 11:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.678 11:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.939 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:49.939 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:49.939 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:49.939 11:48:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.939 11:48:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.939 [2024-11-27 11:48:16.255387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:49.939 [2024-11-27 11:48:16.255573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.939 [2024-11-27 11:48:16.255619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:49.939 [2024-11-27 11:48:16.255664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.939 [2024-11-27 11:48:16.256230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.939 [2024-11-27 11:48:16.256299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:49.939 [2024-11-27 11:48:16.256428] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:49.939 [2024-11-27 11:48:16.256499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:49.939 pt2 00:09:49.939 11:48:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.939 11:48:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:49.939 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:49.939 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:49.939 11:48:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.939 11:48:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.939 [2024-11-27 11:48:16.267344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:49.939 [2024-11-27 11:48:16.267428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.940 [2024-11-27 11:48:16.267465] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:49.940 [2024-11-27 11:48:16.267528] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.940 [2024-11-27 11:48:16.268029] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.940 [2024-11-27 11:48:16.268099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:49.940 [2024-11-27 11:48:16.268206] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:49.940 [2024-11-27 11:48:16.268264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:49.940 [2024-11-27 11:48:16.268464] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:49.940 [2024-11-27 11:48:16.268514] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:49.940 [2024-11-27 11:48:16.268816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:49.940 [2024-11-27 11:48:16.269006] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:09:49.940 [2024-11-27 11:48:16.269017] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:49.940 [2024-11-27 11:48:16.269175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.940 pt3 00:09:49.940 11:48:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.940 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:49.940 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:49.940 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:49.940 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.940 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.940 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.940 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.940 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.940 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.940 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.940 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.940 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.940 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.940 11:48:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.940 11:48:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:49.940 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.940 11:48:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.199 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.199 "name": "raid_bdev1", 00:09:50.199 "uuid": "d94e519a-af80-45b6-b89d-55820f634ecf", 00:09:50.199 "strip_size_kb": 0, 00:09:50.199 "state": "online", 00:09:50.199 "raid_level": "raid1", 00:09:50.199 "superblock": true, 00:09:50.199 "num_base_bdevs": 3, 00:09:50.199 "num_base_bdevs_discovered": 3, 00:09:50.199 "num_base_bdevs_operational": 3, 00:09:50.199 "base_bdevs_list": [ 00:09:50.199 { 00:09:50.199 "name": "pt1", 00:09:50.199 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:50.199 "is_configured": true, 00:09:50.199 "data_offset": 2048, 00:09:50.199 "data_size": 63488 00:09:50.199 }, 00:09:50.199 { 00:09:50.199 "name": "pt2", 00:09:50.199 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.199 "is_configured": true, 00:09:50.199 "data_offset": 2048, 00:09:50.199 "data_size": 63488 00:09:50.199 }, 00:09:50.199 { 00:09:50.199 "name": "pt3", 00:09:50.200 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:50.200 "is_configured": true, 00:09:50.200 "data_offset": 2048, 00:09:50.200 "data_size": 63488 00:09:50.200 } 00:09:50.200 ] 00:09:50.200 }' 00:09:50.200 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.200 11:48:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.459 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:50.459 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:50.459 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:50.459 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:50.459 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:50.459 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:50.459 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:50.459 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:50.459 11:48:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.459 11:48:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.459 [2024-11-27 11:48:16.758846] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:50.459 11:48:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.459 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:50.459 "name": "raid_bdev1", 00:09:50.459 "aliases": [ 00:09:50.459 "d94e519a-af80-45b6-b89d-55820f634ecf" 00:09:50.459 ], 00:09:50.459 "product_name": "Raid Volume", 00:09:50.459 "block_size": 512, 00:09:50.459 "num_blocks": 63488, 00:09:50.459 "uuid": "d94e519a-af80-45b6-b89d-55820f634ecf", 00:09:50.459 "assigned_rate_limits": { 00:09:50.459 "rw_ios_per_sec": 0, 00:09:50.459 "rw_mbytes_per_sec": 0, 00:09:50.459 "r_mbytes_per_sec": 0, 00:09:50.459 "w_mbytes_per_sec": 0 00:09:50.459 }, 00:09:50.459 "claimed": false, 00:09:50.459 "zoned": false, 00:09:50.459 "supported_io_types": { 00:09:50.459 "read": true, 00:09:50.459 "write": true, 00:09:50.459 "unmap": false, 00:09:50.459 "flush": false, 00:09:50.459 "reset": true, 00:09:50.459 "nvme_admin": false, 00:09:50.459 "nvme_io": false, 00:09:50.459 "nvme_io_md": false, 00:09:50.459 "write_zeroes": true, 00:09:50.459 "zcopy": false, 00:09:50.459 "get_zone_info": false, 
00:09:50.459 "zone_management": false, 00:09:50.459 "zone_append": false, 00:09:50.459 "compare": false, 00:09:50.459 "compare_and_write": false, 00:09:50.459 "abort": false, 00:09:50.459 "seek_hole": false, 00:09:50.459 "seek_data": false, 00:09:50.459 "copy": false, 00:09:50.459 "nvme_iov_md": false 00:09:50.459 }, 00:09:50.459 "memory_domains": [ 00:09:50.459 { 00:09:50.459 "dma_device_id": "system", 00:09:50.459 "dma_device_type": 1 00:09:50.459 }, 00:09:50.459 { 00:09:50.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.459 "dma_device_type": 2 00:09:50.459 }, 00:09:50.459 { 00:09:50.459 "dma_device_id": "system", 00:09:50.459 "dma_device_type": 1 00:09:50.459 }, 00:09:50.459 { 00:09:50.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.459 "dma_device_type": 2 00:09:50.459 }, 00:09:50.459 { 00:09:50.459 "dma_device_id": "system", 00:09:50.459 "dma_device_type": 1 00:09:50.459 }, 00:09:50.459 { 00:09:50.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.459 "dma_device_type": 2 00:09:50.459 } 00:09:50.459 ], 00:09:50.459 "driver_specific": { 00:09:50.459 "raid": { 00:09:50.459 "uuid": "d94e519a-af80-45b6-b89d-55820f634ecf", 00:09:50.459 "strip_size_kb": 0, 00:09:50.459 "state": "online", 00:09:50.459 "raid_level": "raid1", 00:09:50.459 "superblock": true, 00:09:50.459 "num_base_bdevs": 3, 00:09:50.459 "num_base_bdevs_discovered": 3, 00:09:50.459 "num_base_bdevs_operational": 3, 00:09:50.459 "base_bdevs_list": [ 00:09:50.459 { 00:09:50.460 "name": "pt1", 00:09:50.460 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:50.460 "is_configured": true, 00:09:50.460 "data_offset": 2048, 00:09:50.460 "data_size": 63488 00:09:50.460 }, 00:09:50.460 { 00:09:50.460 "name": "pt2", 00:09:50.460 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.460 "is_configured": true, 00:09:50.460 "data_offset": 2048, 00:09:50.460 "data_size": 63488 00:09:50.460 }, 00:09:50.460 { 00:09:50.460 "name": "pt3", 00:09:50.460 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:50.460 "is_configured": true, 00:09:50.460 "data_offset": 2048, 00:09:50.460 "data_size": 63488 00:09:50.460 } 00:09:50.460 ] 00:09:50.460 } 00:09:50.460 } 00:09:50.460 }' 00:09:50.460 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:50.719 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:50.719 pt2 00:09:50.719 pt3' 00:09:50.719 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.719 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:50.719 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.719 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:50.719 11:48:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.719 11:48:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.719 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.719 11:48:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.719 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.719 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.719 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.719 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:50.719 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.719 11:48:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.719 11:48:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.719 11:48:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.719 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.719 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.719 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.719 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:50.719 11:48:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.719 11:48:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.719 11:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.719 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.719 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.719 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.719 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:50.719 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:50.719 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.719 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.719 [2024-11-27 11:48:17.062370] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:50.719 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.979 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d94e519a-af80-45b6-b89d-55820f634ecf '!=' d94e519a-af80-45b6-b89d-55820f634ecf ']' 00:09:50.979 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:50.979 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:50.979 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:50.979 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:50.979 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.979 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.979 [2024-11-27 11:48:17.110072] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:50.979 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.979 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:50.979 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.979 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.979 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.979 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.979 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:50.979 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.979 11:48:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.979 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.979 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.979 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.979 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.979 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.979 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.979 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.979 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.979 "name": "raid_bdev1", 00:09:50.979 "uuid": "d94e519a-af80-45b6-b89d-55820f634ecf", 00:09:50.979 "strip_size_kb": 0, 00:09:50.979 "state": "online", 00:09:50.979 "raid_level": "raid1", 00:09:50.979 "superblock": true, 00:09:50.979 "num_base_bdevs": 3, 00:09:50.979 "num_base_bdevs_discovered": 2, 00:09:50.979 "num_base_bdevs_operational": 2, 00:09:50.979 "base_bdevs_list": [ 00:09:50.979 { 00:09:50.979 "name": null, 00:09:50.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.979 "is_configured": false, 00:09:50.979 "data_offset": 0, 00:09:50.979 "data_size": 63488 00:09:50.979 }, 00:09:50.979 { 00:09:50.979 "name": "pt2", 00:09:50.979 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.979 "is_configured": true, 00:09:50.979 "data_offset": 2048, 00:09:50.979 "data_size": 63488 00:09:50.979 }, 00:09:50.979 { 00:09:50.979 "name": "pt3", 00:09:50.979 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:50.979 "is_configured": true, 00:09:50.979 "data_offset": 2048, 00:09:50.979 "data_size": 63488 00:09:50.979 } 
00:09:50.979 ] 00:09:50.979 }' 00:09:50.979 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.979 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.239 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:51.239 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.239 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.239 [2024-11-27 11:48:17.565183] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.239 [2024-11-27 11:48:17.565270] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.239 [2024-11-27 11:48:17.565380] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.239 [2024-11-27 11:48:17.565456] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.239 [2024-11-27 11:48:17.565510] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:51.239 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.239 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.239 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.239 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.239 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:51.239 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.239 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:51.239 11:48:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:51.239 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:51.239 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:51.239 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:51.239 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.239 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.499 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.499 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:51.499 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:51.499 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:51.499 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.499 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.499 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.499 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:51.499 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:51.499 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:51.499 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:51.500 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:51.500 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.500 11:48:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.500 [2024-11-27 11:48:17.652985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:51.500 [2024-11-27 11:48:17.653076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.500 [2024-11-27 11:48:17.653097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:51.500 [2024-11-27 11:48:17.653107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.500 [2024-11-27 11:48:17.655268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.500 [2024-11-27 11:48:17.655358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:51.500 [2024-11-27 11:48:17.655495] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:51.500 [2024-11-27 11:48:17.655582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:51.500 pt2 00:09:51.500 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.500 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:51.500 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.500 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.500 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.500 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.500 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:51.500 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.500 11:48:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.500 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.500 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.500 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.500 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.500 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.500 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.500 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.500 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.500 "name": "raid_bdev1", 00:09:51.500 "uuid": "d94e519a-af80-45b6-b89d-55820f634ecf", 00:09:51.500 "strip_size_kb": 0, 00:09:51.500 "state": "configuring", 00:09:51.500 "raid_level": "raid1", 00:09:51.500 "superblock": true, 00:09:51.500 "num_base_bdevs": 3, 00:09:51.500 "num_base_bdevs_discovered": 1, 00:09:51.500 "num_base_bdevs_operational": 2, 00:09:51.500 "base_bdevs_list": [ 00:09:51.500 { 00:09:51.500 "name": null, 00:09:51.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.500 "is_configured": false, 00:09:51.500 "data_offset": 2048, 00:09:51.500 "data_size": 63488 00:09:51.500 }, 00:09:51.500 { 00:09:51.500 "name": "pt2", 00:09:51.500 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:51.500 "is_configured": true, 00:09:51.500 "data_offset": 2048, 00:09:51.500 "data_size": 63488 00:09:51.500 }, 00:09:51.500 { 00:09:51.500 "name": null, 00:09:51.500 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:51.500 "is_configured": false, 00:09:51.500 "data_offset": 2048, 00:09:51.500 "data_size": 63488 00:09:51.500 } 
00:09:51.500 ] 00:09:51.500 }' 00:09:51.500 11:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.500 11:48:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.760 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:51.760 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:51.760 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:51.760 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:51.760 11:48:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.760 11:48:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.760 [2024-11-27 11:48:18.088294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:51.760 [2024-11-27 11:48:18.088370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.760 [2024-11-27 11:48:18.088394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:51.760 [2024-11-27 11:48:18.088406] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.760 [2024-11-27 11:48:18.088924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.760 [2024-11-27 11:48:18.088947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:51.760 [2024-11-27 11:48:18.089050] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:51.760 [2024-11-27 11:48:18.089088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:51.760 [2024-11-27 11:48:18.089216] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:09:51.760 [2024-11-27 11:48:18.089242] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:51.760 [2024-11-27 11:48:18.089521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:51.760 [2024-11-27 11:48:18.089699] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:51.760 [2024-11-27 11:48:18.089717] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:51.760 [2024-11-27 11:48:18.089893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.760 pt3 00:09:51.760 11:48:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.760 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:51.760 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.760 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:51.760 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.760 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.760 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:51.760 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.760 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.760 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.760 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.760 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:09:51.760 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.760 11:48:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.760 11:48:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.760 11:48:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.760 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.760 "name": "raid_bdev1", 00:09:51.760 "uuid": "d94e519a-af80-45b6-b89d-55820f634ecf", 00:09:51.760 "strip_size_kb": 0, 00:09:51.760 "state": "online", 00:09:51.760 "raid_level": "raid1", 00:09:51.760 "superblock": true, 00:09:51.760 "num_base_bdevs": 3, 00:09:51.760 "num_base_bdevs_discovered": 2, 00:09:51.760 "num_base_bdevs_operational": 2, 00:09:51.760 "base_bdevs_list": [ 00:09:51.760 { 00:09:51.760 "name": null, 00:09:51.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.760 "is_configured": false, 00:09:51.760 "data_offset": 2048, 00:09:51.760 "data_size": 63488 00:09:51.760 }, 00:09:51.760 { 00:09:51.760 "name": "pt2", 00:09:51.760 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:51.760 "is_configured": true, 00:09:51.760 "data_offset": 2048, 00:09:51.760 "data_size": 63488 00:09:51.760 }, 00:09:51.760 { 00:09:51.760 "name": "pt3", 00:09:51.760 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:51.760 "is_configured": true, 00:09:51.760 "data_offset": 2048, 00:09:51.760 "data_size": 63488 00:09:51.760 } 00:09:51.760 ] 00:09:51.760 }' 00:09:51.760 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.760 11:48:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.334 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:52.334 11:48:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.335 [2024-11-27 11:48:18.475639] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:52.335 [2024-11-27 11:48:18.475732] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:52.335 [2024-11-27 11:48:18.475868] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.335 [2024-11-27 11:48:18.475975] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.335 [2024-11-27 11:48:18.476027] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.335 [2024-11-27 11:48:18.551585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:52.335 [2024-11-27 11:48:18.551658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.335 [2024-11-27 11:48:18.551678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:52.335 [2024-11-27 11:48:18.551687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.335 [2024-11-27 11:48:18.553966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.335 [2024-11-27 11:48:18.554002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:52.335 [2024-11-27 11:48:18.554096] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:52.335 [2024-11-27 11:48:18.554141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:52.335 [2024-11-27 11:48:18.554273] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:52.335 [2024-11-27 11:48:18.554283] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:52.335 [2024-11-27 11:48:18.554300] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:09:52.335 [2024-11-27 11:48:18.554356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:52.335 pt1 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.335 "name": "raid_bdev1", 00:09:52.335 "uuid": "d94e519a-af80-45b6-b89d-55820f634ecf", 00:09:52.335 "strip_size_kb": 0, 00:09:52.335 "state": "configuring", 00:09:52.335 "raid_level": "raid1", 00:09:52.335 "superblock": true, 00:09:52.335 "num_base_bdevs": 3, 00:09:52.335 "num_base_bdevs_discovered": 1, 00:09:52.335 "num_base_bdevs_operational": 2, 00:09:52.335 "base_bdevs_list": [ 00:09:52.335 { 00:09:52.335 "name": null, 00:09:52.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.335 "is_configured": false, 00:09:52.335 "data_offset": 2048, 00:09:52.335 "data_size": 63488 00:09:52.335 }, 00:09:52.335 { 00:09:52.335 "name": "pt2", 00:09:52.335 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:52.335 "is_configured": true, 00:09:52.335 "data_offset": 2048, 00:09:52.335 "data_size": 63488 00:09:52.335 }, 00:09:52.335 { 00:09:52.335 "name": null, 00:09:52.335 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:52.335 "is_configured": false, 00:09:52.335 "data_offset": 2048, 00:09:52.335 "data_size": 63488 00:09:52.335 } 00:09:52.335 ] 00:09:52.335 }' 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.335 11:48:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.611 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:52.611 11:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:52.611 11:48:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.611 11:48:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.878 11:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:52.878 11:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:52.878 11:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:52.878 11:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.879 11:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.879 [2024-11-27 11:48:19.030778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:52.879 [2024-11-27 11:48:19.030960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.879 [2024-11-27 11:48:19.031017] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:52.879 [2024-11-27 11:48:19.031090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.879 [2024-11-27 11:48:19.031713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.879 [2024-11-27 11:48:19.031777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:52.879 [2024-11-27 11:48:19.031911] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:52.879 [2024-11-27 11:48:19.031968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:52.879 [2024-11-27 11:48:19.032139] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:52.879 [2024-11-27 11:48:19.032181] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:52.879 [2024-11-27 11:48:19.032469] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:52.879 [2024-11-27 11:48:19.032683] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:52.879 [2024-11-27 11:48:19.032737] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:52.879 [2024-11-27 11:48:19.032957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.879 pt3 00:09:52.879 11:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.879 11:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:52.879 11:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.879 11:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.879 11:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.879 11:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.879 11:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:52.879 11:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.879 11:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.879 11:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.879 11:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.880 11:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.880 11:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.880 11:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.880 11:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.880 11:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:52.880 11:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.880 "name": "raid_bdev1", 00:09:52.880 "uuid": "d94e519a-af80-45b6-b89d-55820f634ecf", 00:09:52.880 "strip_size_kb": 0, 00:09:52.880 "state": "online", 00:09:52.880 "raid_level": "raid1", 00:09:52.880 "superblock": true, 00:09:52.880 "num_base_bdevs": 3, 00:09:52.880 "num_base_bdevs_discovered": 2, 00:09:52.880 "num_base_bdevs_operational": 2, 00:09:52.880 "base_bdevs_list": [ 00:09:52.880 { 00:09:52.880 "name": null, 00:09:52.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.880 "is_configured": false, 00:09:52.880 "data_offset": 2048, 00:09:52.880 "data_size": 63488 00:09:52.880 }, 00:09:52.880 { 00:09:52.880 "name": "pt2", 00:09:52.880 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:52.880 "is_configured": true, 00:09:52.880 "data_offset": 2048, 00:09:52.880 "data_size": 63488 00:09:52.880 }, 00:09:52.880 { 00:09:52.880 "name": "pt3", 00:09:52.881 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:52.881 "is_configured": true, 00:09:52.881 "data_offset": 2048, 00:09:52.881 "data_size": 63488 00:09:52.881 } 00:09:52.881 ] 00:09:52.881 }' 00:09:52.881 11:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.881 11:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.143 11:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:53.143 11:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.143 11:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.143 11:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:53.143 11:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.143 11:48:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:53.143 11:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:53.143 11:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.143 11:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.403 11:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:53.403 [2024-11-27 11:48:19.530192] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:53.403 11:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.403 11:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d94e519a-af80-45b6-b89d-55820f634ecf '!=' d94e519a-af80-45b6-b89d-55820f634ecf ']' 00:09:53.403 11:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68637 00:09:53.403 11:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68637 ']' 00:09:53.403 11:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68637 00:09:53.403 11:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:53.403 11:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.403 11:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68637 00:09:53.403 killing process with pid 68637 00:09:53.403 11:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:53.403 11:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:53.403 11:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68637' 00:09:53.403 11:48:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68637 00:09:53.403 [2024-11-27 11:48:19.609201] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:53.403 [2024-11-27 11:48:19.609302] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.403 [2024-11-27 11:48:19.609365] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:53.403 11:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68637 00:09:53.403 [2024-11-27 11:48:19.609377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:53.662 [2024-11-27 11:48:19.924612] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:55.042 ************************************ 00:09:55.042 END TEST raid_superblock_test 00:09:55.042 ************************************ 00:09:55.042 11:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:55.042 00:09:55.042 real 0m8.074s 00:09:55.042 user 0m12.612s 00:09:55.042 sys 0m1.391s 00:09:55.042 11:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.042 11:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.042 11:48:21 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:55.042 11:48:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:55.042 11:48:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.042 11:48:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:55.042 ************************************ 00:09:55.042 START TEST raid_read_error_test 00:09:55.042 ************************************ 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:09:55.042 11:48:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:55.042 11:48:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.yb2keUeHuF 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69083 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69083 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69083 ']' 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.042 11:48:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.042 [2024-11-27 11:48:21.321326] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:09:55.042 [2024-11-27 11:48:21.321475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69083 ] 00:09:55.301 [2024-11-27 11:48:21.481440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.301 [2024-11-27 11:48:21.634245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.559 [2024-11-27 11:48:21.843815] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.559 [2024-11-27 11:48:21.843974] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.128 BaseBdev1_malloc 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.128 true 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.128 [2024-11-27 11:48:22.341975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:56.128 [2024-11-27 11:48:22.342171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.128 [2024-11-27 11:48:22.342211] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:56.128 [2024-11-27 11:48:22.342225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.128 [2024-11-27 11:48:22.345071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.128 [2024-11-27 11:48:22.345138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:56.128 BaseBdev1 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.128 BaseBdev2_malloc 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.128 true 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.128 [2024-11-27 11:48:22.414448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:56.128 [2024-11-27 11:48:22.414534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.128 [2024-11-27 11:48:22.414560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:56.128 [2024-11-27 11:48:22.414572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.128 [2024-11-27 11:48:22.417215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.128 [2024-11-27 11:48:22.417269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:56.128 BaseBdev2 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.128 BaseBdev3_malloc 00:09:56.128 11:48:22 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.128 true 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.128 [2024-11-27 11:48:22.502141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:56.128 [2024-11-27 11:48:22.502224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.128 [2024-11-27 11:48:22.502250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:56.128 [2024-11-27 11:48:22.502262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.128 [2024-11-27 11:48:22.504925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.128 [2024-11-27 11:48:22.504979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:56.128 BaseBdev3 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.128 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.387 [2024-11-27 11:48:22.514242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:56.388 [2024-11-27 11:48:22.516395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:56.388 [2024-11-27 11:48:22.516490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:56.388 [2024-11-27 11:48:22.516753] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:56.388 [2024-11-27 11:48:22.516767] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:56.388 [2024-11-27 11:48:22.517098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:56.388 [2024-11-27 11:48:22.517317] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:56.388 [2024-11-27 11:48:22.517328] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:56.388 [2024-11-27 11:48:22.517511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.388 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.388 11:48:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:56.388 11:48:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:56.388 11:48:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.388 11:48:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.388 11:48:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.388 11:48:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.388 11:48:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.388 11:48:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.388 11:48:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.388 11:48:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.388 11:48:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.388 11:48:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.388 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.388 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.388 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.388 11:48:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.388 "name": "raid_bdev1", 00:09:56.388 "uuid": "3c86af3e-0267-455b-a569-0c7edf70b7c4", 00:09:56.388 "strip_size_kb": 0, 00:09:56.388 "state": "online", 00:09:56.388 "raid_level": "raid1", 00:09:56.388 "superblock": true, 00:09:56.388 "num_base_bdevs": 3, 00:09:56.388 "num_base_bdevs_discovered": 3, 00:09:56.388 "num_base_bdevs_operational": 3, 00:09:56.388 "base_bdevs_list": [ 00:09:56.388 { 00:09:56.388 "name": "BaseBdev1", 00:09:56.388 "uuid": "8ac76960-6cc1-51ec-8ff6-386996b7adc6", 00:09:56.388 "is_configured": true, 00:09:56.388 "data_offset": 2048, 00:09:56.388 "data_size": 63488 00:09:56.388 }, 00:09:56.388 { 00:09:56.388 "name": "BaseBdev2", 00:09:56.388 "uuid": "ab29351e-91ce-5c70-90b5-2783a03de8ba", 00:09:56.388 "is_configured": true, 00:09:56.388 "data_offset": 2048, 00:09:56.388 "data_size": 63488 
00:09:56.388 }, 00:09:56.388 { 00:09:56.388 "name": "BaseBdev3", 00:09:56.388 "uuid": "193b4f45-8389-5bbd-ae46-35cb7984d892", 00:09:56.388 "is_configured": true, 00:09:56.388 "data_offset": 2048, 00:09:56.388 "data_size": 63488 00:09:56.388 } 00:09:56.388 ] 00:09:56.388 }' 00:09:56.388 11:48:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.388 11:48:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.647 11:48:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:56.647 11:48:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:56.907 [2024-11-27 11:48:23.090665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:57.855 11:48:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:57.855 11:48:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.855 11:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.855 11:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.855 11:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:57.855 11:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:57.855 11:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:57.855 11:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:57.855 11:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:57.855 11:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:57.855 
11:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.855 11:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.855 11:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.855 11:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.855 11:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.855 11:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.855 11:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.855 11:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.855 11:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.855 11:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.855 11:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.855 11:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.855 11:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.855 11:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.855 "name": "raid_bdev1", 00:09:57.855 "uuid": "3c86af3e-0267-455b-a569-0c7edf70b7c4", 00:09:57.855 "strip_size_kb": 0, 00:09:57.855 "state": "online", 00:09:57.855 "raid_level": "raid1", 00:09:57.855 "superblock": true, 00:09:57.855 "num_base_bdevs": 3, 00:09:57.855 "num_base_bdevs_discovered": 3, 00:09:57.855 "num_base_bdevs_operational": 3, 00:09:57.855 "base_bdevs_list": [ 00:09:57.855 { 00:09:57.855 "name": "BaseBdev1", 00:09:57.855 "uuid": "8ac76960-6cc1-51ec-8ff6-386996b7adc6", 
00:09:57.855 "is_configured": true, 00:09:57.855 "data_offset": 2048, 00:09:57.855 "data_size": 63488 00:09:57.855 }, 00:09:57.855 { 00:09:57.855 "name": "BaseBdev2", 00:09:57.855 "uuid": "ab29351e-91ce-5c70-90b5-2783a03de8ba", 00:09:57.855 "is_configured": true, 00:09:57.855 "data_offset": 2048, 00:09:57.855 "data_size": 63488 00:09:57.855 }, 00:09:57.855 { 00:09:57.855 "name": "BaseBdev3", 00:09:57.855 "uuid": "193b4f45-8389-5bbd-ae46-35cb7984d892", 00:09:57.855 "is_configured": true, 00:09:57.855 "data_offset": 2048, 00:09:57.855 "data_size": 63488 00:09:57.855 } 00:09:57.855 ] 00:09:57.855 }' 00:09:57.855 11:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.855 11:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.115 11:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:58.115 11:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.116 11:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.116 [2024-11-27 11:48:24.481393] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:58.116 [2024-11-27 11:48:24.481440] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:58.116 [2024-11-27 11:48:24.484774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.116 [2024-11-27 11:48:24.484832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.116 [2024-11-27 11:48:24.484954] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:58.116 [2024-11-27 11:48:24.484966] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:58.116 { 00:09:58.116 "results": [ 00:09:58.116 { 00:09:58.116 "job": "raid_bdev1", 
00:09:58.116 "core_mask": "0x1", 00:09:58.116 "workload": "randrw", 00:09:58.116 "percentage": 50, 00:09:58.116 "status": "finished", 00:09:58.116 "queue_depth": 1, 00:09:58.116 "io_size": 131072, 00:09:58.116 "runtime": 1.391316, 00:09:58.116 "iops": 11390.654603267698, 00:09:58.116 "mibps": 1423.8318254084622, 00:09:58.116 "io_failed": 0, 00:09:58.116 "io_timeout": 0, 00:09:58.116 "avg_latency_us": 84.66154229371166, 00:09:58.116 "min_latency_us": 25.6, 00:09:58.116 "max_latency_us": 1738.564192139738 00:09:58.116 } 00:09:58.116 ], 00:09:58.116 "core_count": 1 00:09:58.116 } 00:09:58.116 11:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.116 11:48:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69083 00:09:58.116 11:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69083 ']' 00:09:58.116 11:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69083 00:09:58.116 11:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:58.116 11:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.116 11:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69083 00:09:58.374 killing process with pid 69083 00:09:58.374 11:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:58.374 11:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:58.374 11:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69083' 00:09:58.374 11:48:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69083 00:09:58.374 [2024-11-27 11:48:24.520241] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:58.374 11:48:24 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69083 00:09:58.633 [2024-11-27 11:48:24.770196] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:00.013 11:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.yb2keUeHuF 00:10:00.013 11:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:00.013 11:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:00.013 11:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:00.013 ************************************ 00:10:00.013 END TEST raid_read_error_test 00:10:00.013 ************************************ 00:10:00.013 11:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:00.013 11:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:00.013 11:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:00.013 11:48:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:00.013 00:10:00.013 real 0m4.891s 00:10:00.013 user 0m5.896s 00:10:00.013 sys 0m0.576s 00:10:00.013 11:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.013 11:48:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.013 11:48:26 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:00.013 11:48:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:00.013 11:48:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.013 11:48:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:00.013 ************************************ 00:10:00.013 START TEST raid_write_error_test 00:10:00.013 ************************************ 00:10:00.013 11:48:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:10:00.013 11:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:00.013 11:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:00.013 11:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:00.013 11:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:00.013 11:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:00.013 11:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:00.013 11:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:00.013 11:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:00.013 11:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:00.013 11:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:00.013 11:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:00.013 11:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:00.013 11:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:00.013 11:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:00.013 11:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:00.013 11:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:00.013 11:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:00.013 11:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:00.013 11:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:00.013 11:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:00.013 11:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:00.013 11:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:00.014 11:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:00.014 11:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:00.014 11:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Kk8YD6nr1k 00:10:00.014 11:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69235 00:10:00.014 11:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:00.014 11:48:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69235 00:10:00.014 11:48:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69235 ']' 00:10:00.014 11:48:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.014 11:48:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.014 11:48:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:00.014 11:48:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.014 11:48:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.014 [2024-11-27 11:48:26.278530] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:10:00.014 [2024-11-27 11:48:26.278669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69235 ] 00:10:00.273 [2024-11-27 11:48:26.456856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.273 [2024-11-27 11:48:26.585425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.533 [2024-11-27 11:48:26.812582] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.533 [2024-11-27 11:48:26.812747] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:01.102 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.102 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:01.102 11:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:01.102 11:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:01.102 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.102 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.102 BaseBdev1_malloc 00:10:01.102 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.102 11:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:01.102 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.102 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.102 true 00:10:01.102 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.102 11:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:01.102 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.102 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.102 [2024-11-27 11:48:27.245537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:01.102 [2024-11-27 11:48:27.245618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.102 [2024-11-27 11:48:27.245646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:01.102 [2024-11-27 11:48:27.245659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.102 [2024-11-27 11:48:27.248280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.102 [2024-11-27 11:48:27.248448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:01.102 BaseBdev1 00:10:01.102 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.102 11:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:01.102 11:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:01.102 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.102 11:48:27 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:01.102 BaseBdev2_malloc 00:10:01.102 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.102 11:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:01.102 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.103 true 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.103 [2024-11-27 11:48:27.317462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:01.103 [2024-11-27 11:48:27.317547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.103 [2024-11-27 11:48:27.317572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:01.103 [2024-11-27 11:48:27.317585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.103 [2024-11-27 11:48:27.320200] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.103 [2024-11-27 11:48:27.320254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:01.103 BaseBdev2 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:01.103 11:48:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.103 BaseBdev3_malloc 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.103 true 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.103 [2024-11-27 11:48:27.411091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:01.103 [2024-11-27 11:48:27.411157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.103 [2024-11-27 11:48:27.411198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:01.103 [2024-11-27 11:48:27.411209] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.103 [2024-11-27 11:48:27.413666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.103 [2024-11-27 11:48:27.413711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:01.103 BaseBdev3 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.103 [2024-11-27 11:48:27.423134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:01.103 [2024-11-27 11:48:27.425253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:01.103 [2024-11-27 11:48:27.425337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:01.103 [2024-11-27 11:48:27.425566] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:01.103 [2024-11-27 11:48:27.425580] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:01.103 [2024-11-27 11:48:27.425889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:01.103 [2024-11-27 11:48:27.426081] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:01.103 [2024-11-27 11:48:27.426094] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:01.103 [2024-11-27 11:48:27.426283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.103 "name": "raid_bdev1", 00:10:01.103 "uuid": "a0c2faae-410e-4c81-b0d0-de0100bbf57c", 00:10:01.103 "strip_size_kb": 0, 00:10:01.103 "state": "online", 00:10:01.103 "raid_level": "raid1", 00:10:01.103 "superblock": true, 00:10:01.103 "num_base_bdevs": 3, 00:10:01.103 "num_base_bdevs_discovered": 3, 00:10:01.103 "num_base_bdevs_operational": 3, 00:10:01.103 "base_bdevs_list": [ 00:10:01.103 { 00:10:01.103 "name": "BaseBdev1", 00:10:01.103 
"uuid": "e70abc70-912e-5296-a675-2701f6dd2106", 00:10:01.103 "is_configured": true, 00:10:01.103 "data_offset": 2048, 00:10:01.103 "data_size": 63488 00:10:01.103 }, 00:10:01.103 { 00:10:01.103 "name": "BaseBdev2", 00:10:01.103 "uuid": "8177a1d9-080c-5abe-a4b3-fb4535c8323e", 00:10:01.103 "is_configured": true, 00:10:01.103 "data_offset": 2048, 00:10:01.103 "data_size": 63488 00:10:01.103 }, 00:10:01.103 { 00:10:01.103 "name": "BaseBdev3", 00:10:01.103 "uuid": "56511dc4-70e2-55b5-8b73-286928cbe030", 00:10:01.103 "is_configured": true, 00:10:01.103 "data_offset": 2048, 00:10:01.103 "data_size": 63488 00:10:01.103 } 00:10:01.103 ] 00:10:01.103 }' 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.103 11:48:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.672 11:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:01.672 11:48:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:01.672 [2024-11-27 11:48:27.971653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:02.611 11:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:02.611 11:48:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.611 11:48:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.611 [2024-11-27 11:48:28.884298] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:02.611 [2024-11-27 11:48:28.884442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:02.611 [2024-11-27 11:48:28.884701] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
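The setup traced above layers three bdevs per base device and then assembles them into a raid1. The sketch below reconstructs that sequence from the RPC calls visible in the trace; the `scripts/rpc.py` invocation path is an assumption (the test calls these RPCs through its own `rpc_cmd` wrapper), while the commands, sizes, and bdev names are exactly as logged:

```shell
# For each of BaseBdev1..BaseBdev3 the test builds a three-layer stack:
#   malloc (backing storage) -> error (fault injection) -> passthru (stable name)
for i in 1 2 3; do
    # 32 MiB malloc bdev with 512-byte blocks
    scripts/rpc.py bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
    # wrap it in an error-injection bdev (registered as EE_BaseBdev<i>_malloc)
    scripts/rpc.py bdev_error_create "BaseBdev${i}_malloc"
    # expose the error bdev under the plain BaseBdev<i> name
    scripts/rpc.py bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
done

# assemble the three passthru bdevs into raid_bdev1 (raid1, with superblock: -s)
scripts/rpc.py bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' \
    -n raid_bdev1 -s
```

The passthru layer exists so the raid claims a bdev whose name stays constant while the error bdev underneath can be armed and disarmed between runs.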
00:10:02.611 11:48:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.611 11:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:02.611 11:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:02.611 11:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:02.611 11:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:02.611 11:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:02.611 11:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.611 11:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.611 11:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.611 11:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.611 11:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:02.611 11:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.611 11:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.611 11:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.611 11:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.611 11:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.611 11:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.612 11:48:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:02.612 11:48:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.612 11:48:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.612 11:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.612 "name": "raid_bdev1", 00:10:02.612 "uuid": "a0c2faae-410e-4c81-b0d0-de0100bbf57c", 00:10:02.612 "strip_size_kb": 0, 00:10:02.612 "state": "online", 00:10:02.612 "raid_level": "raid1", 00:10:02.612 "superblock": true, 00:10:02.612 "num_base_bdevs": 3, 00:10:02.612 "num_base_bdevs_discovered": 2, 00:10:02.612 "num_base_bdevs_operational": 2, 00:10:02.612 "base_bdevs_list": [ 00:10:02.612 { 00:10:02.612 "name": null, 00:10:02.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.612 "is_configured": false, 00:10:02.612 "data_offset": 0, 00:10:02.612 "data_size": 63488 00:10:02.612 }, 00:10:02.612 { 00:10:02.612 "name": "BaseBdev2", 00:10:02.612 "uuid": "8177a1d9-080c-5abe-a4b3-fb4535c8323e", 00:10:02.612 "is_configured": true, 00:10:02.612 "data_offset": 2048, 00:10:02.612 "data_size": 63488 00:10:02.612 }, 00:10:02.612 { 00:10:02.612 "name": "BaseBdev3", 00:10:02.612 "uuid": "56511dc4-70e2-55b5-8b73-286928cbe030", 00:10:02.612 "is_configured": true, 00:10:02.612 "data_offset": 2048, 00:10:02.612 "data_size": 63488 00:10:02.612 } 00:10:02.612 ] 00:10:02.612 }' 00:10:02.612 11:48:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.612 11:48:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.180 11:48:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:03.180 11:48:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.180 11:48:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.180 [2024-11-27 11:48:29.332295] 
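The degraded-state JSON above (`num_base_bdevs_discovered: 2`, slot 0 nulled out) is the result of the write-error injection that precedes it. Based only on the RPC calls visible in the trace, that step looks roughly like this; the `scripts/rpc.py` path is an assumption, the bdev name and `jq` filter are as logged:

```shell
# arm the error bdev under BaseBdev1 so its write I/Os fail
scripts/rpc.py bdev_error_inject_error EE_BaseBdev1_malloc write failure

# after the failed write, raid1 drops BaseBdev1 but stays online on 2 of 3
# base bdevs; the test pulls raid_bdev1's state out of the full bdev list
scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
```

Because the level is raid1 and the error type is `write`, the test expects exactly `num_base_bdevs - 1 = 2` operational base bdevs, which is what `verify_raid_bdev_state raid_bdev1 online raid1 0 2` checks.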
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:03.180 [2024-11-27 11:48:29.332332] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:03.180 [2024-11-27 11:48:29.335085] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:03.180 [2024-11-27 11:48:29.335152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.180 [2024-11-27 11:48:29.335234] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:03.180 [2024-11-27 11:48:29.335249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:03.181 { 00:10:03.181 "results": [ 00:10:03.181 { 00:10:03.181 "job": "raid_bdev1", 00:10:03.181 "core_mask": "0x1", 00:10:03.181 "workload": "randrw", 00:10:03.181 "percentage": 50, 00:10:03.181 "status": "finished", 00:10:03.181 "queue_depth": 1, 00:10:03.181 "io_size": 131072, 00:10:03.181 "runtime": 1.361293, 00:10:03.181 "iops": 13448.243691842976, 00:10:03.181 "mibps": 1681.030461480372, 00:10:03.181 "io_failed": 0, 00:10:03.181 "io_timeout": 0, 00:10:03.181 "avg_latency_us": 71.38264805764277, 00:10:03.181 "min_latency_us": 23.811353711790392, 00:10:03.181 "max_latency_us": 1767.1825327510917 00:10:03.181 } 00:10:03.181 ], 00:10:03.181 "core_count": 1 00:10:03.181 } 00:10:03.181 11:48:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.181 11:48:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69235 00:10:03.181 11:48:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69235 ']' 00:10:03.181 11:48:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69235 00:10:03.181 11:48:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:03.181 11:48:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:03.181 11:48:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69235 00:10:03.181 11:48:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:03.181 11:48:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:03.181 killing process with pid 69235 00:10:03.181 11:48:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69235' 00:10:03.181 11:48:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69235 00:10:03.181 [2024-11-27 11:48:29.382150] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:03.181 11:48:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69235 00:10:03.440 [2024-11-27 11:48:29.617318] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:04.822 11:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Kk8YD6nr1k 00:10:04.822 11:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:04.822 11:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:04.822 11:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:04.822 11:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:04.822 ************************************ 00:10:04.822 END TEST raid_write_error_test 00:10:04.822 ************************************ 00:10:04.822 11:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:04.822 11:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:04.822 11:48:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:10:04.822 00:10:04.822 real 0m4.651s 00:10:04.822 user 0m5.552s 00:10:04.822 sys 0m0.580s 00:10:04.822 11:48:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.822 11:48:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.822 11:48:30 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:04.822 11:48:30 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:04.822 11:48:30 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:04.822 11:48:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:04.822 11:48:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.822 11:48:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:04.822 ************************************ 00:10:04.822 START TEST raid_state_function_test 00:10:04.822 ************************************ 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.822 
11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:04.822 11:48:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69373 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69373' 00:10:04.822 Process raid pid: 69373 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69373 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69373 ']' 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.822 11:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.822 [2024-11-27 11:48:30.988667] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:10:04.822 [2024-11-27 11:48:30.988788] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.822 [2024-11-27 11:48:31.165575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.082 [2024-11-27 11:48:31.290503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.341 [2024-11-27 11:48:31.495275] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.341 [2024-11-27 11:48:31.495312] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.602 11:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:05.602 11:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:05.602 11:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:05.602 11:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.602 11:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.602 [2024-11-27 11:48:31.837292] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:05.602 [2024-11-27 11:48:31.837363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:05.602 [2024-11-27 11:48:31.837374] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:05.602 [2024-11-27 11:48:31.837384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:05.602 [2024-11-27 11:48:31.837391] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:05.602 [2024-11-27 11:48:31.837401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:05.602 [2024-11-27 11:48:31.837407] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:05.602 [2024-11-27 11:48:31.837416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:05.602 11:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.602 11:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:05.602 11:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.602 11:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.602 11:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.602 11:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.602 11:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.602 11:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.602 11:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.602 11:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.602 11:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.602 11:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.602 11:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.602 11:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:10:05.602 11:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.602 11:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.602 11:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.602 "name": "Existed_Raid", 00:10:05.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.602 "strip_size_kb": 64, 00:10:05.602 "state": "configuring", 00:10:05.602 "raid_level": "raid0", 00:10:05.602 "superblock": false, 00:10:05.602 "num_base_bdevs": 4, 00:10:05.602 "num_base_bdevs_discovered": 0, 00:10:05.602 "num_base_bdevs_operational": 4, 00:10:05.602 "base_bdevs_list": [ 00:10:05.602 { 00:10:05.602 "name": "BaseBdev1", 00:10:05.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.602 "is_configured": false, 00:10:05.602 "data_offset": 0, 00:10:05.602 "data_size": 0 00:10:05.602 }, 00:10:05.602 { 00:10:05.602 "name": "BaseBdev2", 00:10:05.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.602 "is_configured": false, 00:10:05.602 "data_offset": 0, 00:10:05.602 "data_size": 0 00:10:05.602 }, 00:10:05.602 { 00:10:05.602 "name": "BaseBdev3", 00:10:05.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.602 "is_configured": false, 00:10:05.602 "data_offset": 0, 00:10:05.602 "data_size": 0 00:10:05.602 }, 00:10:05.602 { 00:10:05.602 "name": "BaseBdev4", 00:10:05.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.602 "is_configured": false, 00:10:05.602 "data_offset": 0, 00:10:05.602 "data_size": 0 00:10:05.602 } 00:10:05.602 ] 00:10:05.602 }' 00:10:05.602 11:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.602 11:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.173 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:10:06.173 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.173 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.173 [2024-11-27 11:48:32.272488] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:06.173 [2024-11-27 11:48:32.272532] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:06.173 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.173 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:06.173 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.173 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.173 [2024-11-27 11:48:32.284468] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:06.173 [2024-11-27 11:48:32.284560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:06.173 [2024-11-27 11:48:32.284594] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:06.173 [2024-11-27 11:48:32.284622] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:06.173 [2024-11-27 11:48:32.284643] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:06.173 [2024-11-27 11:48:32.284667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:06.173 [2024-11-27 11:48:32.284725] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:06.173 [2024-11-27 11:48:32.284751] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:06.173 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.173 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:06.173 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.173 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.173 [2024-11-27 11:48:32.334522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:06.173 BaseBdev1 00:10:06.173 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.173 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:06.173 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:06.173 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.173 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:06.173 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.173 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.173 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.173 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.173 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.173 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.173 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:06.173 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.173 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.173 [ 00:10:06.173 { 00:10:06.173 "name": "BaseBdev1", 00:10:06.173 "aliases": [ 00:10:06.173 "2f1b8a03-3ed6-4885-8b3d-05361594ef23" 00:10:06.173 ], 00:10:06.173 "product_name": "Malloc disk", 00:10:06.173 "block_size": 512, 00:10:06.173 "num_blocks": 65536, 00:10:06.173 "uuid": "2f1b8a03-3ed6-4885-8b3d-05361594ef23", 00:10:06.173 "assigned_rate_limits": { 00:10:06.173 "rw_ios_per_sec": 0, 00:10:06.173 "rw_mbytes_per_sec": 0, 00:10:06.173 "r_mbytes_per_sec": 0, 00:10:06.173 "w_mbytes_per_sec": 0 00:10:06.173 }, 00:10:06.173 "claimed": true, 00:10:06.173 "claim_type": "exclusive_write", 00:10:06.173 "zoned": false, 00:10:06.173 "supported_io_types": { 00:10:06.173 "read": true, 00:10:06.173 "write": true, 00:10:06.173 "unmap": true, 00:10:06.173 "flush": true, 00:10:06.173 "reset": true, 00:10:06.173 "nvme_admin": false, 00:10:06.173 "nvme_io": false, 00:10:06.173 "nvme_io_md": false, 00:10:06.173 "write_zeroes": true, 00:10:06.173 "zcopy": true, 00:10:06.173 "get_zone_info": false, 00:10:06.173 "zone_management": false, 00:10:06.173 "zone_append": false, 00:10:06.173 "compare": false, 00:10:06.173 "compare_and_write": false, 00:10:06.173 "abort": true, 00:10:06.173 "seek_hole": false, 00:10:06.173 "seek_data": false, 00:10:06.173 "copy": true, 00:10:06.173 "nvme_iov_md": false 00:10:06.173 }, 00:10:06.173 "memory_domains": [ 00:10:06.173 { 00:10:06.173 "dma_device_id": "system", 00:10:06.173 "dma_device_type": 1 00:10:06.173 }, 00:10:06.173 { 00:10:06.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.173 "dma_device_type": 2 00:10:06.173 } 00:10:06.173 ], 00:10:06.173 "driver_specific": {} 00:10:06.173 } 00:10:06.173 ] 00:10:06.174 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:06.174 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:06.174 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:06.174 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.174 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.174 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.174 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.174 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.174 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.174 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.174 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.174 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.174 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.174 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.174 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.174 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.174 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.174 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.174 "name": "Existed_Raid", 
00:10:06.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.174 "strip_size_kb": 64, 00:10:06.174 "state": "configuring", 00:10:06.174 "raid_level": "raid0", 00:10:06.174 "superblock": false, 00:10:06.174 "num_base_bdevs": 4, 00:10:06.174 "num_base_bdevs_discovered": 1, 00:10:06.174 "num_base_bdevs_operational": 4, 00:10:06.174 "base_bdevs_list": [ 00:10:06.174 { 00:10:06.174 "name": "BaseBdev1", 00:10:06.174 "uuid": "2f1b8a03-3ed6-4885-8b3d-05361594ef23", 00:10:06.174 "is_configured": true, 00:10:06.174 "data_offset": 0, 00:10:06.174 "data_size": 65536 00:10:06.174 }, 00:10:06.174 { 00:10:06.174 "name": "BaseBdev2", 00:10:06.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.174 "is_configured": false, 00:10:06.174 "data_offset": 0, 00:10:06.174 "data_size": 0 00:10:06.174 }, 00:10:06.174 { 00:10:06.174 "name": "BaseBdev3", 00:10:06.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.174 "is_configured": false, 00:10:06.174 "data_offset": 0, 00:10:06.174 "data_size": 0 00:10:06.174 }, 00:10:06.174 { 00:10:06.174 "name": "BaseBdev4", 00:10:06.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.174 "is_configured": false, 00:10:06.174 "data_offset": 0, 00:10:06.174 "data_size": 0 00:10:06.174 } 00:10:06.174 ] 00:10:06.174 }' 00:10:06.174 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.174 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.743 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:06.743 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.743 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.743 [2024-11-27 11:48:32.849721] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:06.743 [2024-11-27 11:48:32.849780] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:06.744 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.744 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:06.744 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.744 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.744 [2024-11-27 11:48:32.861747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:06.744 [2024-11-27 11:48:32.863665] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:06.744 [2024-11-27 11:48:32.863714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:06.744 [2024-11-27 11:48:32.863725] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:06.744 [2024-11-27 11:48:32.863738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:06.744 [2024-11-27 11:48:32.863745] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:06.744 [2024-11-27 11:48:32.863755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:06.744 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.744 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:06.744 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:06.744 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:06.744 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.744 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.744 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.744 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.744 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.744 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.744 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.744 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.744 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.744 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.744 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.744 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.744 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.744 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.744 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.744 "name": "Existed_Raid", 00:10:06.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.744 "strip_size_kb": 64, 00:10:06.744 "state": "configuring", 00:10:06.744 "raid_level": "raid0", 00:10:06.744 "superblock": false, 00:10:06.744 "num_base_bdevs": 4, 00:10:06.744 
"num_base_bdevs_discovered": 1, 00:10:06.744 "num_base_bdevs_operational": 4, 00:10:06.744 "base_bdevs_list": [ 00:10:06.744 { 00:10:06.744 "name": "BaseBdev1", 00:10:06.744 "uuid": "2f1b8a03-3ed6-4885-8b3d-05361594ef23", 00:10:06.744 "is_configured": true, 00:10:06.744 "data_offset": 0, 00:10:06.744 "data_size": 65536 00:10:06.744 }, 00:10:06.744 { 00:10:06.744 "name": "BaseBdev2", 00:10:06.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.744 "is_configured": false, 00:10:06.744 "data_offset": 0, 00:10:06.744 "data_size": 0 00:10:06.744 }, 00:10:06.744 { 00:10:06.744 "name": "BaseBdev3", 00:10:06.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.744 "is_configured": false, 00:10:06.744 "data_offset": 0, 00:10:06.744 "data_size": 0 00:10:06.744 }, 00:10:06.744 { 00:10:06.744 "name": "BaseBdev4", 00:10:06.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.744 "is_configured": false, 00:10:06.744 "data_offset": 0, 00:10:06.744 "data_size": 0 00:10:06.744 } 00:10:06.744 ] 00:10:06.744 }' 00:10:06.744 11:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.744 11:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.004 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:07.004 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.004 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.004 [2024-11-27 11:48:33.316143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:07.004 BaseBdev2 00:10:07.004 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.004 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:07.004 11:48:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:07.004 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:07.004 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:07.004 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:07.004 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:07.004 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:07.004 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.004 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.004 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.004 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:07.004 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.004 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.004 [ 00:10:07.004 { 00:10:07.004 "name": "BaseBdev2", 00:10:07.004 "aliases": [ 00:10:07.004 "443aa9a9-a0da-4d01-b333-7c83be4673e0" 00:10:07.004 ], 00:10:07.004 "product_name": "Malloc disk", 00:10:07.004 "block_size": 512, 00:10:07.004 "num_blocks": 65536, 00:10:07.004 "uuid": "443aa9a9-a0da-4d01-b333-7c83be4673e0", 00:10:07.004 "assigned_rate_limits": { 00:10:07.004 "rw_ios_per_sec": 0, 00:10:07.004 "rw_mbytes_per_sec": 0, 00:10:07.004 "r_mbytes_per_sec": 0, 00:10:07.004 "w_mbytes_per_sec": 0 00:10:07.004 }, 00:10:07.005 "claimed": true, 00:10:07.005 "claim_type": "exclusive_write", 00:10:07.005 "zoned": false, 00:10:07.005 "supported_io_types": { 
00:10:07.005 "read": true, 00:10:07.005 "write": true, 00:10:07.005 "unmap": true, 00:10:07.005 "flush": true, 00:10:07.005 "reset": true, 00:10:07.005 "nvme_admin": false, 00:10:07.005 "nvme_io": false, 00:10:07.005 "nvme_io_md": false, 00:10:07.005 "write_zeroes": true, 00:10:07.005 "zcopy": true, 00:10:07.005 "get_zone_info": false, 00:10:07.005 "zone_management": false, 00:10:07.005 "zone_append": false, 00:10:07.005 "compare": false, 00:10:07.005 "compare_and_write": false, 00:10:07.005 "abort": true, 00:10:07.005 "seek_hole": false, 00:10:07.005 "seek_data": false, 00:10:07.005 "copy": true, 00:10:07.005 "nvme_iov_md": false 00:10:07.005 }, 00:10:07.005 "memory_domains": [ 00:10:07.005 { 00:10:07.005 "dma_device_id": "system", 00:10:07.005 "dma_device_type": 1 00:10:07.005 }, 00:10:07.005 { 00:10:07.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.005 "dma_device_type": 2 00:10:07.005 } 00:10:07.005 ], 00:10:07.005 "driver_specific": {} 00:10:07.005 } 00:10:07.005 ] 00:10:07.005 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.005 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:07.005 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:07.005 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:07.005 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:07.005 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.005 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.005 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.005 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:07.005 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.005 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.005 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.005 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.005 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.005 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.005 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.005 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.005 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.005 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.265 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.265 "name": "Existed_Raid", 00:10:07.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.265 "strip_size_kb": 64, 00:10:07.265 "state": "configuring", 00:10:07.265 "raid_level": "raid0", 00:10:07.265 "superblock": false, 00:10:07.265 "num_base_bdevs": 4, 00:10:07.265 "num_base_bdevs_discovered": 2, 00:10:07.265 "num_base_bdevs_operational": 4, 00:10:07.265 "base_bdevs_list": [ 00:10:07.265 { 00:10:07.265 "name": "BaseBdev1", 00:10:07.265 "uuid": "2f1b8a03-3ed6-4885-8b3d-05361594ef23", 00:10:07.265 "is_configured": true, 00:10:07.265 "data_offset": 0, 00:10:07.265 "data_size": 65536 00:10:07.265 }, 00:10:07.265 { 00:10:07.265 "name": "BaseBdev2", 00:10:07.265 "uuid": "443aa9a9-a0da-4d01-b333-7c83be4673e0", 00:10:07.265 
"is_configured": true, 00:10:07.265 "data_offset": 0, 00:10:07.265 "data_size": 65536 00:10:07.265 }, 00:10:07.265 { 00:10:07.265 "name": "BaseBdev3", 00:10:07.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.265 "is_configured": false, 00:10:07.265 "data_offset": 0, 00:10:07.265 "data_size": 0 00:10:07.265 }, 00:10:07.265 { 00:10:07.265 "name": "BaseBdev4", 00:10:07.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.265 "is_configured": false, 00:10:07.265 "data_offset": 0, 00:10:07.265 "data_size": 0 00:10:07.265 } 00:10:07.265 ] 00:10:07.265 }' 00:10:07.265 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.265 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.525 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:07.525 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.525 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.525 [2024-11-27 11:48:33.863112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:07.525 BaseBdev3 00:10:07.525 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.525 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:07.525 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:07.525 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:07.525 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:07.525 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:07.525 11:48:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:07.525 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:07.525 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.525 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.525 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.525 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:07.525 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.525 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.526 [ 00:10:07.526 { 00:10:07.526 "name": "BaseBdev3", 00:10:07.526 "aliases": [ 00:10:07.526 "8c553845-6b70-40e2-95c9-5770cd59a930" 00:10:07.526 ], 00:10:07.526 "product_name": "Malloc disk", 00:10:07.526 "block_size": 512, 00:10:07.526 "num_blocks": 65536, 00:10:07.526 "uuid": "8c553845-6b70-40e2-95c9-5770cd59a930", 00:10:07.526 "assigned_rate_limits": { 00:10:07.526 "rw_ios_per_sec": 0, 00:10:07.526 "rw_mbytes_per_sec": 0, 00:10:07.526 "r_mbytes_per_sec": 0, 00:10:07.526 "w_mbytes_per_sec": 0 00:10:07.526 }, 00:10:07.526 "claimed": true, 00:10:07.526 "claim_type": "exclusive_write", 00:10:07.526 "zoned": false, 00:10:07.526 "supported_io_types": { 00:10:07.526 "read": true, 00:10:07.526 "write": true, 00:10:07.526 "unmap": true, 00:10:07.526 "flush": true, 00:10:07.526 "reset": true, 00:10:07.526 "nvme_admin": false, 00:10:07.526 "nvme_io": false, 00:10:07.526 "nvme_io_md": false, 00:10:07.526 "write_zeroes": true, 00:10:07.526 "zcopy": true, 00:10:07.526 "get_zone_info": false, 00:10:07.526 "zone_management": false, 00:10:07.526 "zone_append": false, 00:10:07.526 "compare": false, 00:10:07.526 "compare_and_write": false, 
00:10:07.526 "abort": true, 00:10:07.526 "seek_hole": false, 00:10:07.526 "seek_data": false, 00:10:07.526 "copy": true, 00:10:07.526 "nvme_iov_md": false 00:10:07.526 }, 00:10:07.526 "memory_domains": [ 00:10:07.526 { 00:10:07.526 "dma_device_id": "system", 00:10:07.526 "dma_device_type": 1 00:10:07.526 }, 00:10:07.526 { 00:10:07.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.526 "dma_device_type": 2 00:10:07.526 } 00:10:07.526 ], 00:10:07.526 "driver_specific": {} 00:10:07.526 } 00:10:07.526 ] 00:10:07.526 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.526 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:07.526 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:07.526 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:07.526 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:07.526 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.526 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.526 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.526 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.526 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.526 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.526 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.526 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:07.526 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.526 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.526 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.526 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.785 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.785 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.785 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.785 "name": "Existed_Raid", 00:10:07.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.785 "strip_size_kb": 64, 00:10:07.785 "state": "configuring", 00:10:07.785 "raid_level": "raid0", 00:10:07.785 "superblock": false, 00:10:07.785 "num_base_bdevs": 4, 00:10:07.785 "num_base_bdevs_discovered": 3, 00:10:07.785 "num_base_bdevs_operational": 4, 00:10:07.785 "base_bdevs_list": [ 00:10:07.785 { 00:10:07.785 "name": "BaseBdev1", 00:10:07.785 "uuid": "2f1b8a03-3ed6-4885-8b3d-05361594ef23", 00:10:07.785 "is_configured": true, 00:10:07.785 "data_offset": 0, 00:10:07.785 "data_size": 65536 00:10:07.785 }, 00:10:07.785 { 00:10:07.785 "name": "BaseBdev2", 00:10:07.785 "uuid": "443aa9a9-a0da-4d01-b333-7c83be4673e0", 00:10:07.785 "is_configured": true, 00:10:07.785 "data_offset": 0, 00:10:07.785 "data_size": 65536 00:10:07.785 }, 00:10:07.785 { 00:10:07.785 "name": "BaseBdev3", 00:10:07.785 "uuid": "8c553845-6b70-40e2-95c9-5770cd59a930", 00:10:07.785 "is_configured": true, 00:10:07.785 "data_offset": 0, 00:10:07.785 "data_size": 65536 00:10:07.785 }, 00:10:07.785 { 00:10:07.785 "name": "BaseBdev4", 00:10:07.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.785 "is_configured": false, 
00:10:07.785 "data_offset": 0, 00:10:07.785 "data_size": 0 00:10:07.785 } 00:10:07.785 ] 00:10:07.785 }' 00:10:07.785 11:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.785 11:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.053 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:08.053 11:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.053 11:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.053 [2024-11-27 11:48:34.345732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:08.053 [2024-11-27 11:48:34.345783] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:08.053 [2024-11-27 11:48:34.345793] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:08.053 [2024-11-27 11:48:34.346131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:08.053 [2024-11-27 11:48:34.346298] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:08.053 [2024-11-27 11:48:34.346328] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:08.053 [2024-11-27 11:48:34.346593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.053 BaseBdev4 00:10:08.053 11:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.053 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:08.053 11:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:08.053 11:48:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:08.053 11:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:08.053 11:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:08.053 11:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:08.053 11:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:08.054 11:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.054 11:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.054 11:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.054 11:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:08.054 11:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.054 11:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.054 [ 00:10:08.054 { 00:10:08.054 "name": "BaseBdev4", 00:10:08.054 "aliases": [ 00:10:08.054 "25d2aaf2-d88a-43c9-a09d-b1955e17a3a5" 00:10:08.054 ], 00:10:08.054 "product_name": "Malloc disk", 00:10:08.054 "block_size": 512, 00:10:08.054 "num_blocks": 65536, 00:10:08.054 "uuid": "25d2aaf2-d88a-43c9-a09d-b1955e17a3a5", 00:10:08.054 "assigned_rate_limits": { 00:10:08.054 "rw_ios_per_sec": 0, 00:10:08.054 "rw_mbytes_per_sec": 0, 00:10:08.054 "r_mbytes_per_sec": 0, 00:10:08.054 "w_mbytes_per_sec": 0 00:10:08.054 }, 00:10:08.054 "claimed": true, 00:10:08.054 "claim_type": "exclusive_write", 00:10:08.054 "zoned": false, 00:10:08.054 "supported_io_types": { 00:10:08.054 "read": true, 00:10:08.054 "write": true, 00:10:08.054 "unmap": true, 00:10:08.054 "flush": true, 00:10:08.054 "reset": true, 00:10:08.054 
"nvme_admin": false, 00:10:08.054 "nvme_io": false, 00:10:08.054 "nvme_io_md": false, 00:10:08.054 "write_zeroes": true, 00:10:08.054 "zcopy": true, 00:10:08.054 "get_zone_info": false, 00:10:08.054 "zone_management": false, 00:10:08.054 "zone_append": false, 00:10:08.054 "compare": false, 00:10:08.054 "compare_and_write": false, 00:10:08.054 "abort": true, 00:10:08.054 "seek_hole": false, 00:10:08.054 "seek_data": false, 00:10:08.054 "copy": true, 00:10:08.054 "nvme_iov_md": false 00:10:08.054 }, 00:10:08.054 "memory_domains": [ 00:10:08.054 { 00:10:08.054 "dma_device_id": "system", 00:10:08.054 "dma_device_type": 1 00:10:08.054 }, 00:10:08.054 { 00:10:08.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.054 "dma_device_type": 2 00:10:08.054 } 00:10:08.054 ], 00:10:08.054 "driver_specific": {} 00:10:08.054 } 00:10:08.054 ] 00:10:08.054 11:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.054 11:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:08.054 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:08.054 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:08.054 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:08.054 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.054 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:08.054 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.054 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.054 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.054 11:48:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.054 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.054 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.054 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.054 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.054 11:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.054 11:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.054 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.054 11:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.313 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.313 "name": "Existed_Raid", 00:10:08.313 "uuid": "cbc19c5f-dfe4-4a7d-ad35-853330e3858c", 00:10:08.313 "strip_size_kb": 64, 00:10:08.313 "state": "online", 00:10:08.313 "raid_level": "raid0", 00:10:08.313 "superblock": false, 00:10:08.313 "num_base_bdevs": 4, 00:10:08.313 "num_base_bdevs_discovered": 4, 00:10:08.313 "num_base_bdevs_operational": 4, 00:10:08.313 "base_bdevs_list": [ 00:10:08.313 { 00:10:08.313 "name": "BaseBdev1", 00:10:08.313 "uuid": "2f1b8a03-3ed6-4885-8b3d-05361594ef23", 00:10:08.313 "is_configured": true, 00:10:08.313 "data_offset": 0, 00:10:08.313 "data_size": 65536 00:10:08.313 }, 00:10:08.313 { 00:10:08.313 "name": "BaseBdev2", 00:10:08.313 "uuid": "443aa9a9-a0da-4d01-b333-7c83be4673e0", 00:10:08.313 "is_configured": true, 00:10:08.313 "data_offset": 0, 00:10:08.313 "data_size": 65536 00:10:08.313 }, 00:10:08.313 { 00:10:08.313 "name": "BaseBdev3", 00:10:08.313 "uuid": 
"8c553845-6b70-40e2-95c9-5770cd59a930", 00:10:08.313 "is_configured": true, 00:10:08.313 "data_offset": 0, 00:10:08.313 "data_size": 65536 00:10:08.313 }, 00:10:08.313 { 00:10:08.313 "name": "BaseBdev4", 00:10:08.313 "uuid": "25d2aaf2-d88a-43c9-a09d-b1955e17a3a5", 00:10:08.313 "is_configured": true, 00:10:08.313 "data_offset": 0, 00:10:08.313 "data_size": 65536 00:10:08.313 } 00:10:08.313 ] 00:10:08.313 }' 00:10:08.313 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.313 11:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.573 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:08.573 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:08.573 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:08.573 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:08.573 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:08.573 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:08.573 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:08.573 11:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.573 11:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.573 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:08.573 [2024-11-27 11:48:34.853334] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:08.573 11:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.573 11:48:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:08.573 "name": "Existed_Raid", 00:10:08.573 "aliases": [ 00:10:08.573 "cbc19c5f-dfe4-4a7d-ad35-853330e3858c" 00:10:08.573 ], 00:10:08.573 "product_name": "Raid Volume", 00:10:08.573 "block_size": 512, 00:10:08.574 "num_blocks": 262144, 00:10:08.574 "uuid": "cbc19c5f-dfe4-4a7d-ad35-853330e3858c", 00:10:08.574 "assigned_rate_limits": { 00:10:08.574 "rw_ios_per_sec": 0, 00:10:08.574 "rw_mbytes_per_sec": 0, 00:10:08.574 "r_mbytes_per_sec": 0, 00:10:08.574 "w_mbytes_per_sec": 0 00:10:08.574 }, 00:10:08.574 "claimed": false, 00:10:08.574 "zoned": false, 00:10:08.574 "supported_io_types": { 00:10:08.574 "read": true, 00:10:08.574 "write": true, 00:10:08.574 "unmap": true, 00:10:08.574 "flush": true, 00:10:08.574 "reset": true, 00:10:08.574 "nvme_admin": false, 00:10:08.574 "nvme_io": false, 00:10:08.574 "nvme_io_md": false, 00:10:08.574 "write_zeroes": true, 00:10:08.574 "zcopy": false, 00:10:08.574 "get_zone_info": false, 00:10:08.574 "zone_management": false, 00:10:08.574 "zone_append": false, 00:10:08.574 "compare": false, 00:10:08.574 "compare_and_write": false, 00:10:08.574 "abort": false, 00:10:08.574 "seek_hole": false, 00:10:08.574 "seek_data": false, 00:10:08.574 "copy": false, 00:10:08.574 "nvme_iov_md": false 00:10:08.574 }, 00:10:08.574 "memory_domains": [ 00:10:08.574 { 00:10:08.574 "dma_device_id": "system", 00:10:08.574 "dma_device_type": 1 00:10:08.574 }, 00:10:08.574 { 00:10:08.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.574 "dma_device_type": 2 00:10:08.574 }, 00:10:08.574 { 00:10:08.574 "dma_device_id": "system", 00:10:08.574 "dma_device_type": 1 00:10:08.574 }, 00:10:08.574 { 00:10:08.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.574 "dma_device_type": 2 00:10:08.574 }, 00:10:08.574 { 00:10:08.574 "dma_device_id": "system", 00:10:08.574 "dma_device_type": 1 00:10:08.574 }, 00:10:08.574 { 00:10:08.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:08.574 "dma_device_type": 2 00:10:08.574 }, 00:10:08.574 { 00:10:08.574 "dma_device_id": "system", 00:10:08.574 "dma_device_type": 1 00:10:08.574 }, 00:10:08.574 { 00:10:08.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.574 "dma_device_type": 2 00:10:08.574 } 00:10:08.574 ], 00:10:08.574 "driver_specific": { 00:10:08.574 "raid": { 00:10:08.574 "uuid": "cbc19c5f-dfe4-4a7d-ad35-853330e3858c", 00:10:08.574 "strip_size_kb": 64, 00:10:08.574 "state": "online", 00:10:08.574 "raid_level": "raid0", 00:10:08.574 "superblock": false, 00:10:08.574 "num_base_bdevs": 4, 00:10:08.574 "num_base_bdevs_discovered": 4, 00:10:08.574 "num_base_bdevs_operational": 4, 00:10:08.574 "base_bdevs_list": [ 00:10:08.574 { 00:10:08.574 "name": "BaseBdev1", 00:10:08.574 "uuid": "2f1b8a03-3ed6-4885-8b3d-05361594ef23", 00:10:08.574 "is_configured": true, 00:10:08.574 "data_offset": 0, 00:10:08.574 "data_size": 65536 00:10:08.574 }, 00:10:08.574 { 00:10:08.574 "name": "BaseBdev2", 00:10:08.574 "uuid": "443aa9a9-a0da-4d01-b333-7c83be4673e0", 00:10:08.574 "is_configured": true, 00:10:08.574 "data_offset": 0, 00:10:08.574 "data_size": 65536 00:10:08.574 }, 00:10:08.574 { 00:10:08.574 "name": "BaseBdev3", 00:10:08.574 "uuid": "8c553845-6b70-40e2-95c9-5770cd59a930", 00:10:08.574 "is_configured": true, 00:10:08.574 "data_offset": 0, 00:10:08.574 "data_size": 65536 00:10:08.574 }, 00:10:08.574 { 00:10:08.574 "name": "BaseBdev4", 00:10:08.574 "uuid": "25d2aaf2-d88a-43c9-a09d-b1955e17a3a5", 00:10:08.574 "is_configured": true, 00:10:08.574 "data_offset": 0, 00:10:08.574 "data_size": 65536 00:10:08.574 } 00:10:08.574 ] 00:10:08.574 } 00:10:08.574 } 00:10:08.574 }' 00:10:08.574 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:08.574 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:08.574 BaseBdev2 00:10:08.574 BaseBdev3 
00:10:08.574 BaseBdev4' 00:10:08.574 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.835 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:08.835 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.835 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:08.835 11:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.835 11:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.835 11:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.835 11:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.835 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.835 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.835 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.835 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:08.835 11:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.835 11:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.835 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.835 11:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.835 11:48:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.835 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.835 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.835 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:08.835 11:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.835 11:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.835 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.835 11:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.835 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.835 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.835 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.835 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:08.835 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.835 11:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.835 11:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.835 11:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.835 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.835 11:48:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.835 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:08.835 11:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.835 11:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.835 [2024-11-27 11:48:35.184437] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:08.835 [2024-11-27 11:48:35.184519] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:08.835 [2024-11-27 11:48:35.184604] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:09.095 11:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.095 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:09.095 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:09.095 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:09.095 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:09.095 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:09.095 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:09.095 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.095 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:09.095 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.095 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:09.095 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.095 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.095 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.095 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.095 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.095 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.095 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.095 11:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.095 11:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.095 11:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.095 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.095 "name": "Existed_Raid", 00:10:09.095 "uuid": "cbc19c5f-dfe4-4a7d-ad35-853330e3858c", 00:10:09.095 "strip_size_kb": 64, 00:10:09.095 "state": "offline", 00:10:09.095 "raid_level": "raid0", 00:10:09.095 "superblock": false, 00:10:09.095 "num_base_bdevs": 4, 00:10:09.095 "num_base_bdevs_discovered": 3, 00:10:09.095 "num_base_bdevs_operational": 3, 00:10:09.095 "base_bdevs_list": [ 00:10:09.095 { 00:10:09.095 "name": null, 00:10:09.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.095 "is_configured": false, 00:10:09.095 "data_offset": 0, 00:10:09.095 "data_size": 65536 00:10:09.095 }, 00:10:09.095 { 00:10:09.095 "name": "BaseBdev2", 00:10:09.095 "uuid": "443aa9a9-a0da-4d01-b333-7c83be4673e0", 00:10:09.095 "is_configured": 
true, 00:10:09.095 "data_offset": 0, 00:10:09.095 "data_size": 65536 00:10:09.095 }, 00:10:09.095 { 00:10:09.095 "name": "BaseBdev3", 00:10:09.095 "uuid": "8c553845-6b70-40e2-95c9-5770cd59a930", 00:10:09.095 "is_configured": true, 00:10:09.095 "data_offset": 0, 00:10:09.095 "data_size": 65536 00:10:09.095 }, 00:10:09.095 { 00:10:09.095 "name": "BaseBdev4", 00:10:09.095 "uuid": "25d2aaf2-d88a-43c9-a09d-b1955e17a3a5", 00:10:09.095 "is_configured": true, 00:10:09.095 "data_offset": 0, 00:10:09.095 "data_size": 65536 00:10:09.095 } 00:10:09.095 ] 00:10:09.095 }' 00:10:09.095 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.095 11:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.355 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:09.355 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:09.355 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.355 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:09.355 11:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.355 11:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.355 11:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.615 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:09.615 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:09.615 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:09.615 11:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:09.615 11:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.615 [2024-11-27 11:48:35.756163] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:09.615 11:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.615 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:09.615 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:09.615 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.615 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:09.615 11:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.615 11:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.615 11:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.615 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:09.616 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:09.616 11:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:09.616 11:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.616 11:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.616 [2024-11-27 11:48:35.912594] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:09.875 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.875 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:09.875 11:48:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:09.875 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.875 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.875 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:09.875 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.875 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.875 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:09.876 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:09.876 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:09.876 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.876 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.876 [2024-11-27 11:48:36.076661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:09.876 [2024-11-27 11:48:36.076761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:09.876 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.876 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:09.876 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:09.876 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.876 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:09.876 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.876 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.876 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.876 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:09.876 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:09.876 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:09.876 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:09.876 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:09.876 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:09.876 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.876 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.136 BaseBdev2 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.136 [ 00:10:10.136 { 00:10:10.136 "name": "BaseBdev2", 00:10:10.136 "aliases": [ 00:10:10.136 "ce69e6cf-24be-462b-b01a-0e39d5ba9a9c" 00:10:10.136 ], 00:10:10.136 "product_name": "Malloc disk", 00:10:10.136 "block_size": 512, 00:10:10.136 "num_blocks": 65536, 00:10:10.136 "uuid": "ce69e6cf-24be-462b-b01a-0e39d5ba9a9c", 00:10:10.136 "assigned_rate_limits": { 00:10:10.136 "rw_ios_per_sec": 0, 00:10:10.136 "rw_mbytes_per_sec": 0, 00:10:10.136 "r_mbytes_per_sec": 0, 00:10:10.136 "w_mbytes_per_sec": 0 00:10:10.136 }, 00:10:10.136 "claimed": false, 00:10:10.136 "zoned": false, 00:10:10.136 "supported_io_types": { 00:10:10.136 "read": true, 00:10:10.136 "write": true, 00:10:10.136 "unmap": true, 00:10:10.136 "flush": true, 00:10:10.136 "reset": true, 00:10:10.136 "nvme_admin": false, 00:10:10.136 "nvme_io": false, 00:10:10.136 "nvme_io_md": false, 00:10:10.136 "write_zeroes": true, 00:10:10.136 "zcopy": true, 00:10:10.136 "get_zone_info": false, 00:10:10.136 "zone_management": false, 00:10:10.136 "zone_append": false, 00:10:10.136 "compare": false, 00:10:10.136 "compare_and_write": false, 00:10:10.136 "abort": true, 00:10:10.136 "seek_hole": false, 00:10:10.136 
"seek_data": false, 00:10:10.136 "copy": true, 00:10:10.136 "nvme_iov_md": false 00:10:10.136 }, 00:10:10.136 "memory_domains": [ 00:10:10.136 { 00:10:10.136 "dma_device_id": "system", 00:10:10.136 "dma_device_type": 1 00:10:10.136 }, 00:10:10.136 { 00:10:10.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.136 "dma_device_type": 2 00:10:10.136 } 00:10:10.136 ], 00:10:10.136 "driver_specific": {} 00:10:10.136 } 00:10:10.136 ] 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.136 BaseBdev3 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.136 [ 00:10:10.136 { 00:10:10.136 "name": "BaseBdev3", 00:10:10.136 "aliases": [ 00:10:10.136 "6a803890-ceb8-44b2-af19-2874f1a7e13f" 00:10:10.136 ], 00:10:10.136 "product_name": "Malloc disk", 00:10:10.136 "block_size": 512, 00:10:10.136 "num_blocks": 65536, 00:10:10.136 "uuid": "6a803890-ceb8-44b2-af19-2874f1a7e13f", 00:10:10.136 "assigned_rate_limits": { 00:10:10.136 "rw_ios_per_sec": 0, 00:10:10.136 "rw_mbytes_per_sec": 0, 00:10:10.136 "r_mbytes_per_sec": 0, 00:10:10.136 "w_mbytes_per_sec": 0 00:10:10.136 }, 00:10:10.136 "claimed": false, 00:10:10.136 "zoned": false, 00:10:10.136 "supported_io_types": { 00:10:10.136 "read": true, 00:10:10.136 "write": true, 00:10:10.136 "unmap": true, 00:10:10.136 "flush": true, 00:10:10.136 "reset": true, 00:10:10.136 "nvme_admin": false, 00:10:10.136 "nvme_io": false, 00:10:10.136 "nvme_io_md": false, 00:10:10.136 "write_zeroes": true, 00:10:10.136 "zcopy": true, 00:10:10.136 "get_zone_info": false, 00:10:10.136 "zone_management": false, 00:10:10.136 "zone_append": false, 00:10:10.136 "compare": false, 00:10:10.136 "compare_and_write": false, 00:10:10.136 "abort": true, 00:10:10.136 "seek_hole": false, 00:10:10.136 "seek_data": false, 
00:10:10.136 "copy": true, 00:10:10.136 "nvme_iov_md": false 00:10:10.136 }, 00:10:10.136 "memory_domains": [ 00:10:10.136 { 00:10:10.136 "dma_device_id": "system", 00:10:10.136 "dma_device_type": 1 00:10:10.136 }, 00:10:10.136 { 00:10:10.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.136 "dma_device_type": 2 00:10:10.136 } 00:10:10.136 ], 00:10:10.136 "driver_specific": {} 00:10:10.136 } 00:10:10.136 ] 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.136 BaseBdev4 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.136 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.136 
11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.137 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.137 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.137 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.137 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:10.137 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.137 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.137 [ 00:10:10.137 { 00:10:10.137 "name": "BaseBdev4", 00:10:10.137 "aliases": [ 00:10:10.137 "6ad9e6a6-b6a8-404a-8bf2-50d18ee025ec" 00:10:10.137 ], 00:10:10.137 "product_name": "Malloc disk", 00:10:10.137 "block_size": 512, 00:10:10.137 "num_blocks": 65536, 00:10:10.137 "uuid": "6ad9e6a6-b6a8-404a-8bf2-50d18ee025ec", 00:10:10.137 "assigned_rate_limits": { 00:10:10.137 "rw_ios_per_sec": 0, 00:10:10.137 "rw_mbytes_per_sec": 0, 00:10:10.137 "r_mbytes_per_sec": 0, 00:10:10.137 "w_mbytes_per_sec": 0 00:10:10.137 }, 00:10:10.137 "claimed": false, 00:10:10.137 "zoned": false, 00:10:10.137 "supported_io_types": { 00:10:10.137 "read": true, 00:10:10.137 "write": true, 00:10:10.137 "unmap": true, 00:10:10.137 "flush": true, 00:10:10.137 "reset": true, 00:10:10.137 "nvme_admin": false, 00:10:10.137 "nvme_io": false, 00:10:10.137 "nvme_io_md": false, 00:10:10.137 "write_zeroes": true, 00:10:10.137 "zcopy": true, 00:10:10.137 "get_zone_info": false, 00:10:10.137 "zone_management": false, 00:10:10.137 "zone_append": false, 00:10:10.137 "compare": false, 00:10:10.137 "compare_and_write": false, 00:10:10.137 "abort": true, 00:10:10.137 "seek_hole": false, 00:10:10.137 "seek_data": false, 00:10:10.137 
"copy": true, 00:10:10.137 "nvme_iov_md": false 00:10:10.137 }, 00:10:10.137 "memory_domains": [ 00:10:10.137 { 00:10:10.137 "dma_device_id": "system", 00:10:10.137 "dma_device_type": 1 00:10:10.137 }, 00:10:10.137 { 00:10:10.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.137 "dma_device_type": 2 00:10:10.137 } 00:10:10.137 ], 00:10:10.137 "driver_specific": {} 00:10:10.137 } 00:10:10.137 ] 00:10:10.137 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.137 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:10.137 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:10.137 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:10.137 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:10.137 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.137 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.137 [2024-11-27 11:48:36.480275] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:10.137 [2024-11-27 11:48:36.480382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:10.137 [2024-11-27 11:48:36.480436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:10.137 [2024-11-27 11:48:36.482616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:10.137 [2024-11-27 11:48:36.482712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:10.137 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.137 11:48:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:10.137 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.137 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.137 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.137 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.137 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.137 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.137 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.137 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.137 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.137 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.137 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.137 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.137 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.137 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.396 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.396 "name": "Existed_Raid", 00:10:10.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.396 "strip_size_kb": 64, 00:10:10.396 "state": "configuring", 00:10:10.396 
"raid_level": "raid0", 00:10:10.396 "superblock": false, 00:10:10.396 "num_base_bdevs": 4, 00:10:10.396 "num_base_bdevs_discovered": 3, 00:10:10.396 "num_base_bdevs_operational": 4, 00:10:10.396 "base_bdevs_list": [ 00:10:10.396 { 00:10:10.396 "name": "BaseBdev1", 00:10:10.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.396 "is_configured": false, 00:10:10.396 "data_offset": 0, 00:10:10.396 "data_size": 0 00:10:10.396 }, 00:10:10.396 { 00:10:10.396 "name": "BaseBdev2", 00:10:10.396 "uuid": "ce69e6cf-24be-462b-b01a-0e39d5ba9a9c", 00:10:10.396 "is_configured": true, 00:10:10.396 "data_offset": 0, 00:10:10.397 "data_size": 65536 00:10:10.397 }, 00:10:10.397 { 00:10:10.397 "name": "BaseBdev3", 00:10:10.397 "uuid": "6a803890-ceb8-44b2-af19-2874f1a7e13f", 00:10:10.397 "is_configured": true, 00:10:10.397 "data_offset": 0, 00:10:10.397 "data_size": 65536 00:10:10.397 }, 00:10:10.397 { 00:10:10.397 "name": "BaseBdev4", 00:10:10.397 "uuid": "6ad9e6a6-b6a8-404a-8bf2-50d18ee025ec", 00:10:10.397 "is_configured": true, 00:10:10.397 "data_offset": 0, 00:10:10.397 "data_size": 65536 00:10:10.397 } 00:10:10.397 ] 00:10:10.397 }' 00:10:10.397 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.397 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.657 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:10.657 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.657 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.657 [2024-11-27 11:48:36.963526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:10.657 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.657 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:10.657 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.657 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.657 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.657 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.657 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.657 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.657 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.657 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.657 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.657 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.657 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.657 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.657 11:48:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.657 11:48:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.657 11:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.657 "name": "Existed_Raid", 00:10:10.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.657 "strip_size_kb": 64, 00:10:10.657 "state": "configuring", 00:10:10.657 "raid_level": "raid0", 00:10:10.657 "superblock": false, 00:10:10.657 
"num_base_bdevs": 4, 00:10:10.657 "num_base_bdevs_discovered": 2, 00:10:10.657 "num_base_bdevs_operational": 4, 00:10:10.657 "base_bdevs_list": [ 00:10:10.657 { 00:10:10.657 "name": "BaseBdev1", 00:10:10.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.657 "is_configured": false, 00:10:10.657 "data_offset": 0, 00:10:10.657 "data_size": 0 00:10:10.657 }, 00:10:10.657 { 00:10:10.657 "name": null, 00:10:10.657 "uuid": "ce69e6cf-24be-462b-b01a-0e39d5ba9a9c", 00:10:10.657 "is_configured": false, 00:10:10.657 "data_offset": 0, 00:10:10.657 "data_size": 65536 00:10:10.657 }, 00:10:10.657 { 00:10:10.657 "name": "BaseBdev3", 00:10:10.657 "uuid": "6a803890-ceb8-44b2-af19-2874f1a7e13f", 00:10:10.657 "is_configured": true, 00:10:10.657 "data_offset": 0, 00:10:10.657 "data_size": 65536 00:10:10.657 }, 00:10:10.657 { 00:10:10.657 "name": "BaseBdev4", 00:10:10.657 "uuid": "6ad9e6a6-b6a8-404a-8bf2-50d18ee025ec", 00:10:10.657 "is_configured": true, 00:10:10.657 "data_offset": 0, 00:10:10.657 "data_size": 65536 00:10:10.657 } 00:10:10.657 ] 00:10:10.657 }' 00:10:10.657 11:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.657 11:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.224 11:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.224 11:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:11.224 11:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.224 11:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.224 11:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.224 11:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:11.224 11:48:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:11.224 11:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.224 11:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.224 [2024-11-27 11:48:37.518137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:11.224 BaseBdev1 00:10:11.224 11:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.224 11:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:11.224 11:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:11.224 11:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:11.224 11:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:11.225 11:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:11.225 11:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:11.225 11:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:11.225 11:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.225 11:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.225 11:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.225 11:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:11.225 11:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.225 11:48:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:11.225 [ 00:10:11.225 { 00:10:11.225 "name": "BaseBdev1", 00:10:11.225 "aliases": [ 00:10:11.225 "8138f296-dca2-4a0d-a5dc-92492ab01d1d" 00:10:11.225 ], 00:10:11.225 "product_name": "Malloc disk", 00:10:11.225 "block_size": 512, 00:10:11.225 "num_blocks": 65536, 00:10:11.225 "uuid": "8138f296-dca2-4a0d-a5dc-92492ab01d1d", 00:10:11.225 "assigned_rate_limits": { 00:10:11.225 "rw_ios_per_sec": 0, 00:10:11.225 "rw_mbytes_per_sec": 0, 00:10:11.225 "r_mbytes_per_sec": 0, 00:10:11.225 "w_mbytes_per_sec": 0 00:10:11.225 }, 00:10:11.225 "claimed": true, 00:10:11.225 "claim_type": "exclusive_write", 00:10:11.225 "zoned": false, 00:10:11.225 "supported_io_types": { 00:10:11.225 "read": true, 00:10:11.225 "write": true, 00:10:11.225 "unmap": true, 00:10:11.225 "flush": true, 00:10:11.225 "reset": true, 00:10:11.225 "nvme_admin": false, 00:10:11.225 "nvme_io": false, 00:10:11.225 "nvme_io_md": false, 00:10:11.225 "write_zeroes": true, 00:10:11.225 "zcopy": true, 00:10:11.225 "get_zone_info": false, 00:10:11.225 "zone_management": false, 00:10:11.225 "zone_append": false, 00:10:11.225 "compare": false, 00:10:11.225 "compare_and_write": false, 00:10:11.225 "abort": true, 00:10:11.225 "seek_hole": false, 00:10:11.225 "seek_data": false, 00:10:11.225 "copy": true, 00:10:11.225 "nvme_iov_md": false 00:10:11.225 }, 00:10:11.225 "memory_domains": [ 00:10:11.225 { 00:10:11.225 "dma_device_id": "system", 00:10:11.225 "dma_device_type": 1 00:10:11.225 }, 00:10:11.225 { 00:10:11.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.225 "dma_device_type": 2 00:10:11.225 } 00:10:11.225 ], 00:10:11.225 "driver_specific": {} 00:10:11.225 } 00:10:11.225 ] 00:10:11.225 11:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.225 11:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:11.225 11:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:11.225 11:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.225 11:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.225 11:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.225 11:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.225 11:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.225 11:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.225 11:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.225 11:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.225 11:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.225 11:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.225 11:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.225 11:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.225 11:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.225 11:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.484 11:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.484 "name": "Existed_Raid", 00:10:11.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.484 "strip_size_kb": 64, 00:10:11.484 "state": "configuring", 00:10:11.484 "raid_level": "raid0", 00:10:11.484 "superblock": false, 
00:10:11.484 "num_base_bdevs": 4, 00:10:11.484 "num_base_bdevs_discovered": 3, 00:10:11.484 "num_base_bdevs_operational": 4, 00:10:11.484 "base_bdevs_list": [ 00:10:11.484 { 00:10:11.484 "name": "BaseBdev1", 00:10:11.484 "uuid": "8138f296-dca2-4a0d-a5dc-92492ab01d1d", 00:10:11.484 "is_configured": true, 00:10:11.484 "data_offset": 0, 00:10:11.484 "data_size": 65536 00:10:11.484 }, 00:10:11.484 { 00:10:11.484 "name": null, 00:10:11.484 "uuid": "ce69e6cf-24be-462b-b01a-0e39d5ba9a9c", 00:10:11.484 "is_configured": false, 00:10:11.484 "data_offset": 0, 00:10:11.484 "data_size": 65536 00:10:11.484 }, 00:10:11.484 { 00:10:11.484 "name": "BaseBdev3", 00:10:11.484 "uuid": "6a803890-ceb8-44b2-af19-2874f1a7e13f", 00:10:11.484 "is_configured": true, 00:10:11.484 "data_offset": 0, 00:10:11.484 "data_size": 65536 00:10:11.484 }, 00:10:11.484 { 00:10:11.484 "name": "BaseBdev4", 00:10:11.484 "uuid": "6ad9e6a6-b6a8-404a-8bf2-50d18ee025ec", 00:10:11.484 "is_configured": true, 00:10:11.484 "data_offset": 0, 00:10:11.484 "data_size": 65536 00:10:11.484 } 00:10:11.484 ] 00:10:11.484 }' 00:10:11.484 11:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.484 11:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.744 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:11.744 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.744 11:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.744 11:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.744 11:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.744 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:11.744 11:48:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:11.744 11:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.744 11:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.744 [2024-11-27 11:48:38.089261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:11.744 11:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.744 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:11.744 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.744 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.744 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.744 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.744 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.744 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.744 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.744 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.744 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.744 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.744 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.744 11:48:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.744 11:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.744 11:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.004 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.004 "name": "Existed_Raid", 00:10:12.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.004 "strip_size_kb": 64, 00:10:12.004 "state": "configuring", 00:10:12.004 "raid_level": "raid0", 00:10:12.004 "superblock": false, 00:10:12.004 "num_base_bdevs": 4, 00:10:12.004 "num_base_bdevs_discovered": 2, 00:10:12.004 "num_base_bdevs_operational": 4, 00:10:12.004 "base_bdevs_list": [ 00:10:12.004 { 00:10:12.004 "name": "BaseBdev1", 00:10:12.004 "uuid": "8138f296-dca2-4a0d-a5dc-92492ab01d1d", 00:10:12.004 "is_configured": true, 00:10:12.004 "data_offset": 0, 00:10:12.004 "data_size": 65536 00:10:12.004 }, 00:10:12.004 { 00:10:12.004 "name": null, 00:10:12.004 "uuid": "ce69e6cf-24be-462b-b01a-0e39d5ba9a9c", 00:10:12.004 "is_configured": false, 00:10:12.004 "data_offset": 0, 00:10:12.004 "data_size": 65536 00:10:12.004 }, 00:10:12.004 { 00:10:12.004 "name": null, 00:10:12.004 "uuid": "6a803890-ceb8-44b2-af19-2874f1a7e13f", 00:10:12.004 "is_configured": false, 00:10:12.004 "data_offset": 0, 00:10:12.004 "data_size": 65536 00:10:12.004 }, 00:10:12.004 { 00:10:12.004 "name": "BaseBdev4", 00:10:12.004 "uuid": "6ad9e6a6-b6a8-404a-8bf2-50d18ee025ec", 00:10:12.004 "is_configured": true, 00:10:12.004 "data_offset": 0, 00:10:12.004 "data_size": 65536 00:10:12.004 } 00:10:12.004 ] 00:10:12.004 }' 00:10:12.004 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.004 11:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.262 11:48:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:12.262 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.262 11:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.262 11:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.262 11:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.262 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:12.262 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:12.262 11:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.262 11:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.522 [2024-11-27 11:48:38.648306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:12.522 11:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.522 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:12.522 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.522 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.522 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.522 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.522 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.522 11:48:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:12.522 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:12.522 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:12.522 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:12.522 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:12.522 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:12.522 11:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:12.522 11:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:12.522 11:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:12.522 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:12.522 "name": "Existed_Raid",
00:10:12.522 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:12.522 "strip_size_kb": 64,
00:10:12.522 "state": "configuring",
00:10:12.522 "raid_level": "raid0",
00:10:12.522 "superblock": false,
00:10:12.522 "num_base_bdevs": 4,
00:10:12.522 "num_base_bdevs_discovered": 3,
00:10:12.522 "num_base_bdevs_operational": 4,
00:10:12.522 "base_bdevs_list": [
00:10:12.522 {
00:10:12.522 "name": "BaseBdev1",
00:10:12.522 "uuid": "8138f296-dca2-4a0d-a5dc-92492ab01d1d",
00:10:12.522 "is_configured": true,
00:10:12.522 "data_offset": 0,
00:10:12.522 "data_size": 65536
00:10:12.522 },
00:10:12.522 {
00:10:12.522 "name": null,
00:10:12.522 "uuid": "ce69e6cf-24be-462b-b01a-0e39d5ba9a9c",
00:10:12.522 "is_configured": false,
00:10:12.522 "data_offset": 0,
00:10:12.522 "data_size": 65536
00:10:12.522 },
00:10:12.522 {
00:10:12.522 "name": "BaseBdev3",
00:10:12.522 "uuid": "6a803890-ceb8-44b2-af19-2874f1a7e13f",
00:10:12.522 "is_configured": true,
00:10:12.522 "data_offset": 0,
00:10:12.522 "data_size": 65536
00:10:12.522 },
00:10:12.522 {
00:10:12.522 "name": "BaseBdev4",
00:10:12.522 "uuid": "6ad9e6a6-b6a8-404a-8bf2-50d18ee025ec",
00:10:12.522 "is_configured": true,
00:10:12.522 "data_offset": 0,
00:10:12.522 "data_size": 65536
00:10:12.522 }
00:10:12.522 ]
00:10:12.522 }'
00:10:12.522 11:48:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:12.522 11:48:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:12.780 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:12.780 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:12.780 11:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:12.780 11:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:12.780 11:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:12.780 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:10:12.780 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:10:12.780 11:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:12.780 11:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:12.780 [2024-11-27 11:48:39.143653] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:13.039 11:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:13.039 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:13.039 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:13.039 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:13.039 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:13.039 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:13.039 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:13.039 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:13.039 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:13.039 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:13.039 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:13.039 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:13.039 11:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:13.039 11:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:13.039 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:13.039 11:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:13.039 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:13.039 "name": "Existed_Raid",
00:10:13.039 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:13.039 "strip_size_kb": 64,
00:10:13.040 "state": "configuring",
00:10:13.040 "raid_level": "raid0",
00:10:13.040 "superblock": false,
00:10:13.040 "num_base_bdevs": 4,
00:10:13.040 "num_base_bdevs_discovered": 2,
00:10:13.040 "num_base_bdevs_operational": 4,
00:10:13.040 "base_bdevs_list": [
00:10:13.040 {
00:10:13.040 "name": null,
00:10:13.040 "uuid": "8138f296-dca2-4a0d-a5dc-92492ab01d1d",
00:10:13.040 "is_configured": false,
00:10:13.040 "data_offset": 0,
00:10:13.040 "data_size": 65536
00:10:13.040 },
00:10:13.040 {
00:10:13.040 "name": null,
00:10:13.040 "uuid": "ce69e6cf-24be-462b-b01a-0e39d5ba9a9c",
00:10:13.040 "is_configured": false,
00:10:13.040 "data_offset": 0,
00:10:13.040 "data_size": 65536
00:10:13.040 },
00:10:13.040 {
00:10:13.040 "name": "BaseBdev3",
00:10:13.040 "uuid": "6a803890-ceb8-44b2-af19-2874f1a7e13f",
00:10:13.040 "is_configured": true,
00:10:13.040 "data_offset": 0,
00:10:13.040 "data_size": 65536
00:10:13.040 },
00:10:13.040 {
00:10:13.040 "name": "BaseBdev4",
00:10:13.040 "uuid": "6ad9e6a6-b6a8-404a-8bf2-50d18ee025ec",
00:10:13.040 "is_configured": true,
00:10:13.040 "data_offset": 0,
00:10:13.040 "data_size": 65536
00:10:13.040 }
00:10:13.040 ]
00:10:13.040 }'
00:10:13.040 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:13.040 11:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:13.298 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:13.298 11:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:13.298 11:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:13.298 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:13.298 11:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:13.559 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:10:13.559 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:10:13.559 11:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:13.559 11:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:13.559 [2024-11-27 11:48:39.717972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:13.559 11:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:13.559 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:13.559 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:13.559 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:13.559 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:13.559 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:13.559 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:13.559 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:13.559 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:13.559 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:13.559 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:13.559 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:13.559 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:13.559 11:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:13.559 11:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:13.559 11:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:13.559 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:13.559 "name": "Existed_Raid",
00:10:13.559 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:13.559 "strip_size_kb": 64,
00:10:13.559 "state": "configuring",
00:10:13.559 "raid_level": "raid0",
00:10:13.559 "superblock": false,
00:10:13.559 "num_base_bdevs": 4,
00:10:13.559 "num_base_bdevs_discovered": 3,
00:10:13.559 "num_base_bdevs_operational": 4,
00:10:13.559 "base_bdevs_list": [
00:10:13.559 {
00:10:13.559 "name": null,
00:10:13.559 "uuid": "8138f296-dca2-4a0d-a5dc-92492ab01d1d",
00:10:13.559 "is_configured": false,
00:10:13.559 "data_offset": 0,
00:10:13.559 "data_size": 65536
00:10:13.559 },
00:10:13.559 {
00:10:13.559 "name": "BaseBdev2",
00:10:13.559 "uuid": "ce69e6cf-24be-462b-b01a-0e39d5ba9a9c",
00:10:13.559 "is_configured": true,
00:10:13.559 "data_offset": 0,
00:10:13.559 "data_size": 65536
00:10:13.559 },
00:10:13.559 {
00:10:13.559 "name": "BaseBdev3",
00:10:13.559 "uuid": "6a803890-ceb8-44b2-af19-2874f1a7e13f",
00:10:13.559 "is_configured": true,
00:10:13.559 "data_offset": 0,
00:10:13.559 "data_size": 65536
00:10:13.559 },
00:10:13.559 {
00:10:13.559 "name": "BaseBdev4",
00:10:13.559 "uuid": "6ad9e6a6-b6a8-404a-8bf2-50d18ee025ec",
00:10:13.559 "is_configured": true,
00:10:13.559 "data_offset": 0,
00:10:13.559 "data_size": 65536
00:10:13.559 }
00:10:13.559 ]
00:10:13.559 }'
00:10:13.559 11:48:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:13.559 11:48:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:13.819 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:13.819 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:13.819 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:13.819 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:13.819 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:13.819 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:10:13.819 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:10:13.819 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:13.819 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:13.819 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8138f296-dca2-4a0d-a5dc-92492ab01d1d
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:14.080 [2024-11-27 11:48:40.257306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:10:14.080 [2024-11-27 11:48:40.257418] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:10:14.080 [2024-11-27 11:48:40.257448] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
00:10:14.080 [2024-11-27 11:48:40.257781] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:10:14.080 [2024-11-27 11:48:40.258007] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:10:14.080 [2024-11-27 11:48:40.258055] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200
00:10:14.080 [2024-11-27 11:48:40.258370] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:14.080 NewBaseBdev
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:14.080 [
00:10:14.080 {
00:10:14.080 "name": "NewBaseBdev",
00:10:14.080 "aliases": [
00:10:14.080 "8138f296-dca2-4a0d-a5dc-92492ab01d1d"
00:10:14.080 ],
00:10:14.080 "product_name": "Malloc disk",
00:10:14.080 "block_size": 512,
00:10:14.080 "num_blocks": 65536,
00:10:14.080 "uuid": "8138f296-dca2-4a0d-a5dc-92492ab01d1d",
00:10:14.080 "assigned_rate_limits": {
00:10:14.080 "rw_ios_per_sec": 0,
00:10:14.080 "rw_mbytes_per_sec": 0,
00:10:14.080 "r_mbytes_per_sec": 0,
00:10:14.080 "w_mbytes_per_sec": 0
00:10:14.080 },
00:10:14.080 "claimed": true,
00:10:14.080 "claim_type": "exclusive_write",
00:10:14.080 "zoned": false,
00:10:14.080 "supported_io_types": {
00:10:14.080 "read": true,
00:10:14.080 "write": true,
00:10:14.080 "unmap": true,
00:10:14.080 "flush": true,
00:10:14.080 "reset": true,
00:10:14.080 "nvme_admin": false,
00:10:14.080 "nvme_io": false,
00:10:14.080 "nvme_io_md": false,
00:10:14.080 "write_zeroes": true,
00:10:14.080 "zcopy": true,
00:10:14.080 "get_zone_info": false,
00:10:14.080 "zone_management": false,
00:10:14.080 "zone_append": false,
00:10:14.080 "compare": false,
00:10:14.080 "compare_and_write": false,
00:10:14.080 "abort": true,
00:10:14.080 "seek_hole": false,
00:10:14.080 "seek_data": false,
00:10:14.080 "copy": true,
00:10:14.080 "nvme_iov_md": false
00:10:14.080 },
00:10:14.080 "memory_domains": [
00:10:14.080 {
00:10:14.080 "dma_device_id": "system",
00:10:14.080 "dma_device_type": 1
00:10:14.080 },
00:10:14.080 {
00:10:14.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:14.080 "dma_device_type": 2
00:10:14.080 }
00:10:14.080 ],
00:10:14.080 "driver_specific": {}
00:10:14.080 }
00:10:14.080 ]
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:14.080 "name": "Existed_Raid",
00:10:14.080 "uuid": "3990060a-3d06-4a41-9685-a7c580e55ed1",
00:10:14.080 "strip_size_kb": 64,
00:10:14.080 "state": "online",
00:10:14.080 "raid_level": "raid0",
00:10:14.080 "superblock": false,
00:10:14.080 "num_base_bdevs": 4,
00:10:14.080 "num_base_bdevs_discovered": 4,
00:10:14.080 "num_base_bdevs_operational": 4,
00:10:14.080 "base_bdevs_list": [
00:10:14.080 {
00:10:14.080 "name": "NewBaseBdev",
00:10:14.080 "uuid": "8138f296-dca2-4a0d-a5dc-92492ab01d1d",
00:10:14.080 "is_configured": true,
00:10:14.080 "data_offset": 0,
00:10:14.080 "data_size": 65536
00:10:14.080 },
00:10:14.080 {
00:10:14.080 "name": "BaseBdev2",
00:10:14.080 "uuid": "ce69e6cf-24be-462b-b01a-0e39d5ba9a9c",
00:10:14.080 "is_configured": true,
00:10:14.080 "data_offset": 0,
00:10:14.080 "data_size": 65536
00:10:14.080 },
00:10:14.080 {
00:10:14.080 "name": "BaseBdev3",
00:10:14.080 "uuid": "6a803890-ceb8-44b2-af19-2874f1a7e13f",
00:10:14.080 "is_configured": true,
00:10:14.080 "data_offset": 0,
00:10:14.080 "data_size": 65536
00:10:14.080 },
00:10:14.080 {
00:10:14.080 "name": "BaseBdev4",
00:10:14.080 "uuid": "6ad9e6a6-b6a8-404a-8bf2-50d18ee025ec",
00:10:14.080 "is_configured": true,
00:10:14.080 "data_offset": 0,
00:10:14.080 "data_size": 65536
00:10:14.080 }
00:10:14.080 ]
00:10:14.080 }'
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:14.080 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:14.651 [2024-11-27 11:48:40.764931] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:14.651 "name": "Existed_Raid",
00:10:14.651 "aliases": [
00:10:14.651 "3990060a-3d06-4a41-9685-a7c580e55ed1"
00:10:14.651 ],
00:10:14.651 "product_name": "Raid Volume",
00:10:14.651 "block_size": 512,
00:10:14.651 "num_blocks": 262144,
00:10:14.651 "uuid": "3990060a-3d06-4a41-9685-a7c580e55ed1",
00:10:14.651 "assigned_rate_limits": {
00:10:14.651 "rw_ios_per_sec": 0,
00:10:14.651 "rw_mbytes_per_sec": 0,
00:10:14.651 "r_mbytes_per_sec": 0,
00:10:14.651 "w_mbytes_per_sec": 0
00:10:14.651 },
00:10:14.651 "claimed": false,
00:10:14.651 "zoned": false,
00:10:14.651 "supported_io_types": {
00:10:14.651 "read": true,
00:10:14.651 "write": true,
00:10:14.651 "unmap": true,
00:10:14.651 "flush": true,
00:10:14.651 "reset": true,
00:10:14.651 "nvme_admin": false,
00:10:14.651 "nvme_io": false,
00:10:14.651 "nvme_io_md": false,
00:10:14.651 "write_zeroes": true,
00:10:14.651 "zcopy": false,
00:10:14.651 "get_zone_info": false,
00:10:14.651 "zone_management": false,
00:10:14.651 "zone_append": false,
00:10:14.651 "compare": false,
00:10:14.651 "compare_and_write": false,
00:10:14.651 "abort": false,
00:10:14.651 "seek_hole": false,
00:10:14.651 "seek_data": false,
00:10:14.651 "copy": false,
00:10:14.651 "nvme_iov_md": false
00:10:14.651 },
00:10:14.651 "memory_domains": [
00:10:14.651 {
00:10:14.651 "dma_device_id": "system",
00:10:14.651 "dma_device_type": 1
00:10:14.651 },
00:10:14.651 {
00:10:14.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:14.651 "dma_device_type": 2
00:10:14.651 },
00:10:14.651 {
00:10:14.651 "dma_device_id": "system",
00:10:14.651 "dma_device_type": 1
00:10:14.651 },
00:10:14.651 {
00:10:14.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:14.651 "dma_device_type": 2
00:10:14.651 },
00:10:14.651 {
00:10:14.651 "dma_device_id": "system",
00:10:14.651 "dma_device_type": 1
00:10:14.651 },
00:10:14.651 {
00:10:14.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:14.651 "dma_device_type": 2
00:10:14.651 },
00:10:14.651 {
00:10:14.651 "dma_device_id": "system",
00:10:14.651 "dma_device_type": 1
00:10:14.651 },
00:10:14.651 {
00:10:14.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:14.651 "dma_device_type": 2
00:10:14.651 }
00:10:14.651 ],
00:10:14.651 "driver_specific": {
00:10:14.651 "raid": {
00:10:14.651 "uuid": "3990060a-3d06-4a41-9685-a7c580e55ed1",
00:10:14.651 "strip_size_kb": 64,
00:10:14.651 "state": "online",
00:10:14.651 "raid_level": "raid0",
00:10:14.651 "superblock": false,
00:10:14.651 "num_base_bdevs": 4,
00:10:14.651 "num_base_bdevs_discovered": 4,
00:10:14.651 "num_base_bdevs_operational": 4,
00:10:14.651 "base_bdevs_list": [
00:10:14.651 {
00:10:14.651 "name": "NewBaseBdev",
00:10:14.651 "uuid": "8138f296-dca2-4a0d-a5dc-92492ab01d1d",
00:10:14.651 "is_configured": true,
00:10:14.651 "data_offset": 0,
00:10:14.651 "data_size": 65536
00:10:14.651 },
00:10:14.651 {
00:10:14.651 "name": "BaseBdev2",
00:10:14.651 "uuid": "ce69e6cf-24be-462b-b01a-0e39d5ba9a9c",
00:10:14.651 "is_configured": true,
00:10:14.651 "data_offset": 0,
00:10:14.651 "data_size": 65536
00:10:14.651 },
00:10:14.651 {
00:10:14.651 "name": "BaseBdev3",
00:10:14.651 "uuid": "6a803890-ceb8-44b2-af19-2874f1a7e13f",
00:10:14.651 "is_configured": true,
00:10:14.651 "data_offset": 0,
00:10:14.651 "data_size": 65536
00:10:14.651 },
00:10:14.651 {
00:10:14.651 "name": "BaseBdev4",
00:10:14.651 "uuid": "6ad9e6a6-b6a8-404a-8bf2-50d18ee025ec",
00:10:14.651 "is_configured": true,
00:10:14.651 "data_offset": 0,
00:10:14.651 "data_size": 65536
00:10:14.651 }
00:10:14.651 ]
00:10:14.651 }
00:10:14.651 }
00:10:14.651 }'
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:10:14.651 BaseBdev2
00:10:14.651 BaseBdev3
00:10:14.651 BaseBdev4'
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:14.651 11:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.651 11:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:14.651 11:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:14.651 11:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:14.651 11:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:10:14.651 11:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.651 11:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:14.651 11:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:14.651 11:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.911 11:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:14.911 11:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:14.911 11:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:14.911 11:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:10:14.911 11:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:14.911 11:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.911 11:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:14.911 11:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.911 11:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:14.911 11:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:14.911 11:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:14.911 11:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.911 11:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:14.911 [2024-11-27 11:48:41.115919] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:14.911 [2024-11-27 11:48:41.115999] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:14.911 [2024-11-27 11:48:41.116116] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:14.911 [2024-11-27 11:48:41.116228] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:14.911 [2024-11-27 11:48:41.116280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:10:14.911 11:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.911 11:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69373
00:10:14.911 11:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 69373 ']'
00:10:14.911 11:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69373
00:10:14.911 11:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname
00:10:14.911 11:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:14.911 11:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69373
killing process with pid 69373
11:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:14.911 11:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:14.911 11:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69373'
00:10:14.911 11:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69373
00:10:14.911 [2024-11-27 11:48:41.164482] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:14.912 11:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69373
00:10:15.482 [2024-11-27 11:48:41.589423] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:16.420 11:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:10:16.420
00:10:16.420 real 0m11.890s
00:10:16.420 user 0m18.866s
00:10:16.420 sys 0m2.046s
00:10:16.420 11:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:16.420 ************************************
00:10:16.420 END TEST raid_state_function_test
00:10:16.420 ************************************
00:10:16.420 11:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.680 11:48:42 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true
00:10:16.680 11:48:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:10:16.680 11:48:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:16.680 11:48:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:16.680 ************************************
00:10:16.680 START TEST raid_state_function_test_sb
00:10:16.680 ************************************
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
Process raid pid: 70051
11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70051
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:10:16.680 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70051'
00:10:16.681 11:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70051
00:10:16.681 11:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70051 ']'
00:10:16.681 11:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:16.681 11:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:16.681 11:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
11:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:16.681 11:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:16.681 [2024-11-27 11:48:42.954233] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization...
00:10:16.681 [2024-11-27 11:48:42.954346] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:16.942 [2024-11-27 11:48:43.129410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:17.205 [2024-11-27 11:48:43.247821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:17.205 [2024-11-27 11:48:43.459945] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:17.205 [2024-11-27 11:48:43.459982] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:17.466 11:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:17.466 11:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:10:17.466 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:17.466 11:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:17.466 11:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:17.466 [2024-11-27 11:48:43.808810] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:17.466 [2024-11-27 11:48:43.808948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:17.466 [2024-11-27 11:48:43.808988] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:17.466 [2024-11-27 11:48:43.809018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:17.466 [2024-11-27 11:48:43.809041] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find
bdev with name: BaseBdev3 00:10:17.466 [2024-11-27 11:48:43.809114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:17.466 [2024-11-27 11:48:43.809143] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:17.466 [2024-11-27 11:48:43.809170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:17.466 11:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.466 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:17.466 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.466 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.466 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.466 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.466 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.466 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.466 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.466 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.466 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.466 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.466 11:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.466 11:48:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.466 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.466 11:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.727 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.727 "name": "Existed_Raid", 00:10:17.727 "uuid": "f5fd4253-44f6-4dd2-a00e-f93540770812", 00:10:17.727 "strip_size_kb": 64, 00:10:17.727 "state": "configuring", 00:10:17.727 "raid_level": "raid0", 00:10:17.727 "superblock": true, 00:10:17.727 "num_base_bdevs": 4, 00:10:17.727 "num_base_bdevs_discovered": 0, 00:10:17.727 "num_base_bdevs_operational": 4, 00:10:17.727 "base_bdevs_list": [ 00:10:17.727 { 00:10:17.727 "name": "BaseBdev1", 00:10:17.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.727 "is_configured": false, 00:10:17.727 "data_offset": 0, 00:10:17.727 "data_size": 0 00:10:17.727 }, 00:10:17.727 { 00:10:17.727 "name": "BaseBdev2", 00:10:17.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.727 "is_configured": false, 00:10:17.727 "data_offset": 0, 00:10:17.727 "data_size": 0 00:10:17.727 }, 00:10:17.727 { 00:10:17.727 "name": "BaseBdev3", 00:10:17.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.727 "is_configured": false, 00:10:17.727 "data_offset": 0, 00:10:17.727 "data_size": 0 00:10:17.727 }, 00:10:17.727 { 00:10:17.727 "name": "BaseBdev4", 00:10:17.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.727 "is_configured": false, 00:10:17.727 "data_offset": 0, 00:10:17.727 "data_size": 0 00:10:17.727 } 00:10:17.727 ] 00:10:17.727 }' 00:10:17.727 11:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.727 11:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.987 11:48:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:17.987 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.987 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.987 [2024-11-27 11:48:44.263952] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:17.987 [2024-11-27 11:48:44.264041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:17.987 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.987 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:17.987 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.987 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.987 [2024-11-27 11:48:44.271934] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:17.987 [2024-11-27 11:48:44.272015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:17.987 [2024-11-27 11:48:44.272031] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:17.987 [2024-11-27 11:48:44.272042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:17.987 [2024-11-27 11:48:44.272049] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:17.987 [2024-11-27 11:48:44.272059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:17.987 [2024-11-27 11:48:44.272067] bdev.c:8666:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:17.987 [2024-11-27 11:48:44.272076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:17.987 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.987 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:17.987 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.987 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.987 [2024-11-27 11:48:44.316236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:17.987 BaseBdev1 00:10:17.987 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.987 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:17.987 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:17.988 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:17.988 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:17.988 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:17.988 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:17.988 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:17.988 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.988 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.988 11:48:44 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.988 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:17.988 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.988 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.988 [ 00:10:17.988 { 00:10:17.988 "name": "BaseBdev1", 00:10:17.988 "aliases": [ 00:10:17.988 "9e31a4ef-3a96-45f7-85c2-38b96d64d737" 00:10:17.988 ], 00:10:17.988 "product_name": "Malloc disk", 00:10:17.988 "block_size": 512, 00:10:17.988 "num_blocks": 65536, 00:10:17.988 "uuid": "9e31a4ef-3a96-45f7-85c2-38b96d64d737", 00:10:17.988 "assigned_rate_limits": { 00:10:17.988 "rw_ios_per_sec": 0, 00:10:17.988 "rw_mbytes_per_sec": 0, 00:10:17.988 "r_mbytes_per_sec": 0, 00:10:17.988 "w_mbytes_per_sec": 0 00:10:17.988 }, 00:10:17.988 "claimed": true, 00:10:17.988 "claim_type": "exclusive_write", 00:10:17.988 "zoned": false, 00:10:17.988 "supported_io_types": { 00:10:17.988 "read": true, 00:10:17.988 "write": true, 00:10:17.988 "unmap": true, 00:10:17.988 "flush": true, 00:10:17.988 "reset": true, 00:10:17.988 "nvme_admin": false, 00:10:17.988 "nvme_io": false, 00:10:17.988 "nvme_io_md": false, 00:10:17.988 "write_zeroes": true, 00:10:17.988 "zcopy": true, 00:10:17.988 "get_zone_info": false, 00:10:17.988 "zone_management": false, 00:10:17.988 "zone_append": false, 00:10:17.988 "compare": false, 00:10:17.988 "compare_and_write": false, 00:10:17.988 "abort": true, 00:10:17.988 "seek_hole": false, 00:10:17.988 "seek_data": false, 00:10:17.988 "copy": true, 00:10:17.988 "nvme_iov_md": false 00:10:17.988 }, 00:10:17.988 "memory_domains": [ 00:10:17.988 { 00:10:17.988 "dma_device_id": "system", 00:10:17.988 "dma_device_type": 1 00:10:17.988 }, 00:10:17.988 { 00:10:17.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.988 "dma_device_type": 2 00:10:17.988 } 
00:10:17.988 ], 00:10:17.988 "driver_specific": {} 00:10:17.988 } 00:10:17.988 ] 00:10:17.988 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.988 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:17.988 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:17.988 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.988 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.988 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.988 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.988 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.988 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.988 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.988 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.988 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.988 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.988 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.988 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.988 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.988 11:48:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.248 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.248 "name": "Existed_Raid", 00:10:18.248 "uuid": "ddc14805-932e-4644-adb4-2ec9daea4e74", 00:10:18.248 "strip_size_kb": 64, 00:10:18.248 "state": "configuring", 00:10:18.248 "raid_level": "raid0", 00:10:18.248 "superblock": true, 00:10:18.248 "num_base_bdevs": 4, 00:10:18.248 "num_base_bdevs_discovered": 1, 00:10:18.248 "num_base_bdevs_operational": 4, 00:10:18.248 "base_bdevs_list": [ 00:10:18.248 { 00:10:18.248 "name": "BaseBdev1", 00:10:18.248 "uuid": "9e31a4ef-3a96-45f7-85c2-38b96d64d737", 00:10:18.248 "is_configured": true, 00:10:18.248 "data_offset": 2048, 00:10:18.248 "data_size": 63488 00:10:18.248 }, 00:10:18.248 { 00:10:18.248 "name": "BaseBdev2", 00:10:18.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.248 "is_configured": false, 00:10:18.248 "data_offset": 0, 00:10:18.248 "data_size": 0 00:10:18.248 }, 00:10:18.248 { 00:10:18.248 "name": "BaseBdev3", 00:10:18.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.248 "is_configured": false, 00:10:18.248 "data_offset": 0, 00:10:18.248 "data_size": 0 00:10:18.248 }, 00:10:18.248 { 00:10:18.248 "name": "BaseBdev4", 00:10:18.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.248 "is_configured": false, 00:10:18.248 "data_offset": 0, 00:10:18.248 "data_size": 0 00:10:18.248 } 00:10:18.248 ] 00:10:18.248 }' 00:10:18.248 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.248 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.508 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:18.508 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.508 11:48:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.508 [2024-11-27 11:48:44.795506] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:18.508 [2024-11-27 11:48:44.795615] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:18.508 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.508 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:18.508 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.508 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.508 [2024-11-27 11:48:44.807574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:18.508 [2024-11-27 11:48:44.809763] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:18.508 [2024-11-27 11:48:44.809874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:18.508 [2024-11-27 11:48:44.809913] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:18.508 [2024-11-27 11:48:44.809945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:18.508 [2024-11-27 11:48:44.809969] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:18.508 [2024-11-27 11:48:44.809994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:18.508 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.508 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:18.508 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:18.508 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:18.508 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.508 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.508 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.508 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.508 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.508 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.508 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.508 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.508 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.508 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.508 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.508 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.508 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.508 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.508 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:18.508 "name": "Existed_Raid", 00:10:18.508 "uuid": "0f6bc8ad-05bb-499b-9c3c-dd540baaa0cd", 00:10:18.508 "strip_size_kb": 64, 00:10:18.508 "state": "configuring", 00:10:18.508 "raid_level": "raid0", 00:10:18.508 "superblock": true, 00:10:18.508 "num_base_bdevs": 4, 00:10:18.508 "num_base_bdevs_discovered": 1, 00:10:18.508 "num_base_bdevs_operational": 4, 00:10:18.508 "base_bdevs_list": [ 00:10:18.508 { 00:10:18.508 "name": "BaseBdev1", 00:10:18.508 "uuid": "9e31a4ef-3a96-45f7-85c2-38b96d64d737", 00:10:18.508 "is_configured": true, 00:10:18.508 "data_offset": 2048, 00:10:18.508 "data_size": 63488 00:10:18.508 }, 00:10:18.508 { 00:10:18.508 "name": "BaseBdev2", 00:10:18.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.508 "is_configured": false, 00:10:18.508 "data_offset": 0, 00:10:18.508 "data_size": 0 00:10:18.508 }, 00:10:18.508 { 00:10:18.508 "name": "BaseBdev3", 00:10:18.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.508 "is_configured": false, 00:10:18.508 "data_offset": 0, 00:10:18.508 "data_size": 0 00:10:18.508 }, 00:10:18.508 { 00:10:18.508 "name": "BaseBdev4", 00:10:18.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.509 "is_configured": false, 00:10:18.509 "data_offset": 0, 00:10:18.509 "data_size": 0 00:10:18.509 } 00:10:18.509 ] 00:10:18.509 }' 00:10:18.509 11:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.509 11:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.079 [2024-11-27 11:48:45.303026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:19.079 BaseBdev2 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.079 [ 00:10:19.079 { 00:10:19.079 "name": "BaseBdev2", 00:10:19.079 "aliases": [ 00:10:19.079 "2bc7c69e-e03f-427a-91ba-360da97a10fc" 00:10:19.079 ], 00:10:19.079 "product_name": "Malloc disk", 00:10:19.079 "block_size": 512, 00:10:19.079 "num_blocks": 65536, 00:10:19.079 "uuid": "2bc7c69e-e03f-427a-91ba-360da97a10fc", 
00:10:19.079 "assigned_rate_limits": { 00:10:19.079 "rw_ios_per_sec": 0, 00:10:19.079 "rw_mbytes_per_sec": 0, 00:10:19.079 "r_mbytes_per_sec": 0, 00:10:19.079 "w_mbytes_per_sec": 0 00:10:19.079 }, 00:10:19.079 "claimed": true, 00:10:19.079 "claim_type": "exclusive_write", 00:10:19.079 "zoned": false, 00:10:19.079 "supported_io_types": { 00:10:19.079 "read": true, 00:10:19.079 "write": true, 00:10:19.079 "unmap": true, 00:10:19.079 "flush": true, 00:10:19.079 "reset": true, 00:10:19.079 "nvme_admin": false, 00:10:19.079 "nvme_io": false, 00:10:19.079 "nvme_io_md": false, 00:10:19.079 "write_zeroes": true, 00:10:19.079 "zcopy": true, 00:10:19.079 "get_zone_info": false, 00:10:19.079 "zone_management": false, 00:10:19.079 "zone_append": false, 00:10:19.079 "compare": false, 00:10:19.079 "compare_and_write": false, 00:10:19.079 "abort": true, 00:10:19.079 "seek_hole": false, 00:10:19.079 "seek_data": false, 00:10:19.079 "copy": true, 00:10:19.079 "nvme_iov_md": false 00:10:19.079 }, 00:10:19.079 "memory_domains": [ 00:10:19.079 { 00:10:19.079 "dma_device_id": "system", 00:10:19.079 "dma_device_type": 1 00:10:19.079 }, 00:10:19.079 { 00:10:19.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.079 "dma_device_type": 2 00:10:19.079 } 00:10:19.079 ], 00:10:19.079 "driver_specific": {} 00:10:19.079 } 00:10:19.079 ] 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.079 "name": "Existed_Raid", 00:10:19.079 "uuid": "0f6bc8ad-05bb-499b-9c3c-dd540baaa0cd", 00:10:19.079 "strip_size_kb": 64, 00:10:19.079 "state": "configuring", 00:10:19.079 "raid_level": "raid0", 00:10:19.079 "superblock": true, 00:10:19.079 "num_base_bdevs": 4, 00:10:19.079 "num_base_bdevs_discovered": 2, 00:10:19.079 
"num_base_bdevs_operational": 4, 00:10:19.079 "base_bdevs_list": [ 00:10:19.079 { 00:10:19.079 "name": "BaseBdev1", 00:10:19.079 "uuid": "9e31a4ef-3a96-45f7-85c2-38b96d64d737", 00:10:19.079 "is_configured": true, 00:10:19.079 "data_offset": 2048, 00:10:19.079 "data_size": 63488 00:10:19.079 }, 00:10:19.079 { 00:10:19.079 "name": "BaseBdev2", 00:10:19.079 "uuid": "2bc7c69e-e03f-427a-91ba-360da97a10fc", 00:10:19.079 "is_configured": true, 00:10:19.079 "data_offset": 2048, 00:10:19.079 "data_size": 63488 00:10:19.079 }, 00:10:19.079 { 00:10:19.079 "name": "BaseBdev3", 00:10:19.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.079 "is_configured": false, 00:10:19.079 "data_offset": 0, 00:10:19.079 "data_size": 0 00:10:19.079 }, 00:10:19.079 { 00:10:19.079 "name": "BaseBdev4", 00:10:19.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.079 "is_configured": false, 00:10:19.079 "data_offset": 0, 00:10:19.079 "data_size": 0 00:10:19.079 } 00:10:19.079 ] 00:10:19.079 }' 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.079 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.649 [2024-11-27 11:48:45.787333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:19.649 BaseBdev3 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.649 [ 00:10:19.649 { 00:10:19.649 "name": "BaseBdev3", 00:10:19.649 "aliases": [ 00:10:19.649 "2d3f75b7-9802-4fa9-a1cb-64ae7f3fd9ed" 00:10:19.649 ], 00:10:19.649 "product_name": "Malloc disk", 00:10:19.649 "block_size": 512, 00:10:19.649 "num_blocks": 65536, 00:10:19.649 "uuid": "2d3f75b7-9802-4fa9-a1cb-64ae7f3fd9ed", 00:10:19.649 "assigned_rate_limits": { 00:10:19.649 "rw_ios_per_sec": 0, 00:10:19.649 "rw_mbytes_per_sec": 0, 00:10:19.649 "r_mbytes_per_sec": 0, 00:10:19.649 "w_mbytes_per_sec": 0 00:10:19.649 }, 00:10:19.649 "claimed": true, 00:10:19.649 "claim_type": "exclusive_write", 00:10:19.649 "zoned": false, 00:10:19.649 "supported_io_types": { 
00:10:19.649 "read": true, 00:10:19.649 "write": true, 00:10:19.649 "unmap": true, 00:10:19.649 "flush": true, 00:10:19.649 "reset": true, 00:10:19.649 "nvme_admin": false, 00:10:19.649 "nvme_io": false, 00:10:19.649 "nvme_io_md": false, 00:10:19.649 "write_zeroes": true, 00:10:19.649 "zcopy": true, 00:10:19.649 "get_zone_info": false, 00:10:19.649 "zone_management": false, 00:10:19.649 "zone_append": false, 00:10:19.649 "compare": false, 00:10:19.649 "compare_and_write": false, 00:10:19.649 "abort": true, 00:10:19.649 "seek_hole": false, 00:10:19.649 "seek_data": false, 00:10:19.649 "copy": true, 00:10:19.649 "nvme_iov_md": false 00:10:19.649 }, 00:10:19.649 "memory_domains": [ 00:10:19.649 { 00:10:19.649 "dma_device_id": "system", 00:10:19.649 "dma_device_type": 1 00:10:19.649 }, 00:10:19.649 { 00:10:19.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.649 "dma_device_type": 2 00:10:19.649 } 00:10:19.649 ], 00:10:19.649 "driver_specific": {} 00:10:19.649 } 00:10:19.649 ] 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.649 "name": "Existed_Raid", 00:10:19.649 "uuid": "0f6bc8ad-05bb-499b-9c3c-dd540baaa0cd", 00:10:19.649 "strip_size_kb": 64, 00:10:19.649 "state": "configuring", 00:10:19.649 "raid_level": "raid0", 00:10:19.649 "superblock": true, 00:10:19.649 "num_base_bdevs": 4, 00:10:19.649 "num_base_bdevs_discovered": 3, 00:10:19.649 "num_base_bdevs_operational": 4, 00:10:19.649 "base_bdevs_list": [ 00:10:19.649 { 00:10:19.649 "name": "BaseBdev1", 00:10:19.649 "uuid": "9e31a4ef-3a96-45f7-85c2-38b96d64d737", 00:10:19.649 "is_configured": true, 00:10:19.649 "data_offset": 2048, 00:10:19.649 "data_size": 63488 00:10:19.649 }, 00:10:19.649 { 00:10:19.649 "name": "BaseBdev2", 00:10:19.649 
"uuid": "2bc7c69e-e03f-427a-91ba-360da97a10fc", 00:10:19.649 "is_configured": true, 00:10:19.649 "data_offset": 2048, 00:10:19.649 "data_size": 63488 00:10:19.649 }, 00:10:19.649 { 00:10:19.649 "name": "BaseBdev3", 00:10:19.649 "uuid": "2d3f75b7-9802-4fa9-a1cb-64ae7f3fd9ed", 00:10:19.649 "is_configured": true, 00:10:19.649 "data_offset": 2048, 00:10:19.649 "data_size": 63488 00:10:19.649 }, 00:10:19.649 { 00:10:19.649 "name": "BaseBdev4", 00:10:19.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.649 "is_configured": false, 00:10:19.649 "data_offset": 0, 00:10:19.649 "data_size": 0 00:10:19.649 } 00:10:19.649 ] 00:10:19.649 }' 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.649 11:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.909 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:19.909 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.909 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.169 [2024-11-27 11:48:46.325772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:20.169 BaseBdev4 00:10:20.169 [2024-11-27 11:48:46.326200] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:20.169 [2024-11-27 11:48:46.326223] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:20.169 [2024-11-27 11:48:46.326520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:20.169 [2024-11-27 11:48:46.326687] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:20.169 [2024-11-27 11:48:46.326699] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:20.169 [2024-11-27 11:48:46.326864] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.169 [ 00:10:20.169 { 00:10:20.169 "name": "BaseBdev4", 00:10:20.169 "aliases": [ 00:10:20.169 "66aa81ea-6d9d-4619-ba91-d84c35a26dfd" 00:10:20.169 ], 00:10:20.169 "product_name": "Malloc disk", 00:10:20.169 "block_size": 512, 00:10:20.169 
"num_blocks": 65536, 00:10:20.169 "uuid": "66aa81ea-6d9d-4619-ba91-d84c35a26dfd", 00:10:20.169 "assigned_rate_limits": { 00:10:20.169 "rw_ios_per_sec": 0, 00:10:20.169 "rw_mbytes_per_sec": 0, 00:10:20.169 "r_mbytes_per_sec": 0, 00:10:20.169 "w_mbytes_per_sec": 0 00:10:20.169 }, 00:10:20.169 "claimed": true, 00:10:20.169 "claim_type": "exclusive_write", 00:10:20.169 "zoned": false, 00:10:20.169 "supported_io_types": { 00:10:20.169 "read": true, 00:10:20.169 "write": true, 00:10:20.169 "unmap": true, 00:10:20.169 "flush": true, 00:10:20.169 "reset": true, 00:10:20.169 "nvme_admin": false, 00:10:20.169 "nvme_io": false, 00:10:20.169 "nvme_io_md": false, 00:10:20.169 "write_zeroes": true, 00:10:20.169 "zcopy": true, 00:10:20.169 "get_zone_info": false, 00:10:20.169 "zone_management": false, 00:10:20.169 "zone_append": false, 00:10:20.169 "compare": false, 00:10:20.169 "compare_and_write": false, 00:10:20.169 "abort": true, 00:10:20.169 "seek_hole": false, 00:10:20.169 "seek_data": false, 00:10:20.169 "copy": true, 00:10:20.169 "nvme_iov_md": false 00:10:20.169 }, 00:10:20.169 "memory_domains": [ 00:10:20.169 { 00:10:20.169 "dma_device_id": "system", 00:10:20.169 "dma_device_type": 1 00:10:20.169 }, 00:10:20.169 { 00:10:20.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.169 "dma_device_type": 2 00:10:20.169 } 00:10:20.169 ], 00:10:20.169 "driver_specific": {} 00:10:20.169 } 00:10:20.169 ] 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.169 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.169 "name": "Existed_Raid", 00:10:20.169 "uuid": "0f6bc8ad-05bb-499b-9c3c-dd540baaa0cd", 00:10:20.169 "strip_size_kb": 64, 00:10:20.169 "state": "online", 00:10:20.169 "raid_level": "raid0", 00:10:20.169 "superblock": true, 00:10:20.169 "num_base_bdevs": 4, 
00:10:20.169 "num_base_bdevs_discovered": 4, 00:10:20.169 "num_base_bdevs_operational": 4, 00:10:20.169 "base_bdevs_list": [ 00:10:20.169 { 00:10:20.169 "name": "BaseBdev1", 00:10:20.169 "uuid": "9e31a4ef-3a96-45f7-85c2-38b96d64d737", 00:10:20.169 "is_configured": true, 00:10:20.169 "data_offset": 2048, 00:10:20.169 "data_size": 63488 00:10:20.169 }, 00:10:20.169 { 00:10:20.169 "name": "BaseBdev2", 00:10:20.169 "uuid": "2bc7c69e-e03f-427a-91ba-360da97a10fc", 00:10:20.169 "is_configured": true, 00:10:20.169 "data_offset": 2048, 00:10:20.169 "data_size": 63488 00:10:20.169 }, 00:10:20.169 { 00:10:20.169 "name": "BaseBdev3", 00:10:20.170 "uuid": "2d3f75b7-9802-4fa9-a1cb-64ae7f3fd9ed", 00:10:20.170 "is_configured": true, 00:10:20.170 "data_offset": 2048, 00:10:20.170 "data_size": 63488 00:10:20.170 }, 00:10:20.170 { 00:10:20.170 "name": "BaseBdev4", 00:10:20.170 "uuid": "66aa81ea-6d9d-4619-ba91-d84c35a26dfd", 00:10:20.170 "is_configured": true, 00:10:20.170 "data_offset": 2048, 00:10:20.170 "data_size": 63488 00:10:20.170 } 00:10:20.170 ] 00:10:20.170 }' 00:10:20.170 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.170 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.430 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:20.430 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:20.430 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:20.430 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:20.430 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:20.430 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:20.689 
11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:20.689 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:20.689 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.689 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.689 [2024-11-27 11:48:46.821348] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.689 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.689 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:20.689 "name": "Existed_Raid", 00:10:20.689 "aliases": [ 00:10:20.689 "0f6bc8ad-05bb-499b-9c3c-dd540baaa0cd" 00:10:20.689 ], 00:10:20.689 "product_name": "Raid Volume", 00:10:20.689 "block_size": 512, 00:10:20.689 "num_blocks": 253952, 00:10:20.689 "uuid": "0f6bc8ad-05bb-499b-9c3c-dd540baaa0cd", 00:10:20.689 "assigned_rate_limits": { 00:10:20.689 "rw_ios_per_sec": 0, 00:10:20.689 "rw_mbytes_per_sec": 0, 00:10:20.689 "r_mbytes_per_sec": 0, 00:10:20.689 "w_mbytes_per_sec": 0 00:10:20.689 }, 00:10:20.689 "claimed": false, 00:10:20.689 "zoned": false, 00:10:20.689 "supported_io_types": { 00:10:20.689 "read": true, 00:10:20.689 "write": true, 00:10:20.689 "unmap": true, 00:10:20.689 "flush": true, 00:10:20.689 "reset": true, 00:10:20.689 "nvme_admin": false, 00:10:20.689 "nvme_io": false, 00:10:20.689 "nvme_io_md": false, 00:10:20.689 "write_zeroes": true, 00:10:20.689 "zcopy": false, 00:10:20.689 "get_zone_info": false, 00:10:20.689 "zone_management": false, 00:10:20.689 "zone_append": false, 00:10:20.689 "compare": false, 00:10:20.689 "compare_and_write": false, 00:10:20.689 "abort": false, 00:10:20.689 "seek_hole": false, 00:10:20.689 "seek_data": false, 00:10:20.689 "copy": false, 00:10:20.689 
"nvme_iov_md": false 00:10:20.689 }, 00:10:20.689 "memory_domains": [ 00:10:20.689 { 00:10:20.689 "dma_device_id": "system", 00:10:20.689 "dma_device_type": 1 00:10:20.689 }, 00:10:20.689 { 00:10:20.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.689 "dma_device_type": 2 00:10:20.689 }, 00:10:20.689 { 00:10:20.689 "dma_device_id": "system", 00:10:20.689 "dma_device_type": 1 00:10:20.689 }, 00:10:20.689 { 00:10:20.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.689 "dma_device_type": 2 00:10:20.689 }, 00:10:20.689 { 00:10:20.689 "dma_device_id": "system", 00:10:20.689 "dma_device_type": 1 00:10:20.689 }, 00:10:20.689 { 00:10:20.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.689 "dma_device_type": 2 00:10:20.689 }, 00:10:20.689 { 00:10:20.689 "dma_device_id": "system", 00:10:20.689 "dma_device_type": 1 00:10:20.689 }, 00:10:20.689 { 00:10:20.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.689 "dma_device_type": 2 00:10:20.689 } 00:10:20.689 ], 00:10:20.689 "driver_specific": { 00:10:20.689 "raid": { 00:10:20.689 "uuid": "0f6bc8ad-05bb-499b-9c3c-dd540baaa0cd", 00:10:20.689 "strip_size_kb": 64, 00:10:20.689 "state": "online", 00:10:20.689 "raid_level": "raid0", 00:10:20.689 "superblock": true, 00:10:20.689 "num_base_bdevs": 4, 00:10:20.689 "num_base_bdevs_discovered": 4, 00:10:20.689 "num_base_bdevs_operational": 4, 00:10:20.689 "base_bdevs_list": [ 00:10:20.689 { 00:10:20.689 "name": "BaseBdev1", 00:10:20.689 "uuid": "9e31a4ef-3a96-45f7-85c2-38b96d64d737", 00:10:20.689 "is_configured": true, 00:10:20.689 "data_offset": 2048, 00:10:20.689 "data_size": 63488 00:10:20.689 }, 00:10:20.689 { 00:10:20.689 "name": "BaseBdev2", 00:10:20.689 "uuid": "2bc7c69e-e03f-427a-91ba-360da97a10fc", 00:10:20.689 "is_configured": true, 00:10:20.689 "data_offset": 2048, 00:10:20.689 "data_size": 63488 00:10:20.689 }, 00:10:20.689 { 00:10:20.689 "name": "BaseBdev3", 00:10:20.690 "uuid": "2d3f75b7-9802-4fa9-a1cb-64ae7f3fd9ed", 00:10:20.690 "is_configured": true, 
00:10:20.690 "data_offset": 2048, 00:10:20.690 "data_size": 63488 00:10:20.690 }, 00:10:20.690 { 00:10:20.690 "name": "BaseBdev4", 00:10:20.690 "uuid": "66aa81ea-6d9d-4619-ba91-d84c35a26dfd", 00:10:20.690 "is_configured": true, 00:10:20.690 "data_offset": 2048, 00:10:20.690 "data_size": 63488 00:10:20.690 } 00:10:20.690 ] 00:10:20.690 } 00:10:20.690 } 00:10:20.690 }' 00:10:20.690 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:20.690 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:20.690 BaseBdev2 00:10:20.690 BaseBdev3 00:10:20.690 BaseBdev4' 00:10:20.690 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.690 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:20.690 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.690 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:20.690 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.690 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.690 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.690 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.690 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.690 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.690 11:48:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.690 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.690 11:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:20.690 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.690 11:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.690 11:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.690 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.690 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.690 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.690 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:20.690 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.690 11:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.690 11:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.690 11:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.950 [2024-11-27 11:48:47.140525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:20.950 [2024-11-27 11:48:47.140608] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:20.950 [2024-11-27 11:48:47.140674] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:20.950 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.950 "name": "Existed_Raid", 00:10:20.950 "uuid": "0f6bc8ad-05bb-499b-9c3c-dd540baaa0cd", 00:10:20.950 "strip_size_kb": 64, 00:10:20.950 "state": "offline", 00:10:20.951 "raid_level": "raid0", 00:10:20.951 "superblock": true, 00:10:20.951 "num_base_bdevs": 4, 00:10:20.951 "num_base_bdevs_discovered": 3, 00:10:20.951 "num_base_bdevs_operational": 3, 00:10:20.951 "base_bdevs_list": [ 00:10:20.951 { 00:10:20.951 "name": null, 00:10:20.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.951 "is_configured": false, 00:10:20.951 "data_offset": 0, 00:10:20.951 "data_size": 63488 00:10:20.951 }, 00:10:20.951 { 00:10:20.951 "name": "BaseBdev2", 00:10:20.951 "uuid": "2bc7c69e-e03f-427a-91ba-360da97a10fc", 00:10:20.951 "is_configured": true, 00:10:20.951 "data_offset": 2048, 00:10:20.951 "data_size": 63488 00:10:20.951 }, 00:10:20.951 { 00:10:20.951 "name": "BaseBdev3", 00:10:20.951 "uuid": "2d3f75b7-9802-4fa9-a1cb-64ae7f3fd9ed", 00:10:20.951 "is_configured": true, 00:10:20.951 "data_offset": 2048, 00:10:20.951 "data_size": 63488 00:10:20.951 }, 00:10:20.951 { 00:10:20.951 "name": "BaseBdev4", 00:10:20.951 "uuid": "66aa81ea-6d9d-4619-ba91-d84c35a26dfd", 00:10:20.951 "is_configured": true, 00:10:20.951 "data_offset": 2048, 00:10:20.951 "data_size": 63488 00:10:20.951 } 00:10:20.951 ] 00:10:20.951 }' 00:10:20.951 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.951 11:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.521 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:21.521 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:21.521 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:21.521 11:48:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.521 11:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.521 11:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.521 11:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.521 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:21.521 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:21.521 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:21.521 11:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.521 11:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.521 [2024-11-27 11:48:47.734744] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:21.521 11:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.521 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:21.521 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:21.521 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.521 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:21.521 11:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.521 11:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.521 11:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:21.521 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:21.521 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:21.521 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:21.521 11:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.521 11:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.521 [2024-11-27 11:48:47.887839] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:21.782 11:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.782 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:21.782 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:21.782 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.782 11:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:21.782 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.782 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.782 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.782 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:21.782 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:21.782 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:21.782 11:48:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.782 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.782 [2024-11-27 11:48:48.053325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:21.782 [2024-11-27 11:48:48.053438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:21.782 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.782 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:21.782 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:21.782 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.782 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:21.782 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.782 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.042 BaseBdev2 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.042 [ 00:10:22.042 { 00:10:22.042 "name": "BaseBdev2", 00:10:22.042 "aliases": [ 00:10:22.042 
"ab4c4c17-f380-4779-902e-8dd3d67f3442" 00:10:22.042 ], 00:10:22.042 "product_name": "Malloc disk", 00:10:22.042 "block_size": 512, 00:10:22.042 "num_blocks": 65536, 00:10:22.042 "uuid": "ab4c4c17-f380-4779-902e-8dd3d67f3442", 00:10:22.042 "assigned_rate_limits": { 00:10:22.042 "rw_ios_per_sec": 0, 00:10:22.042 "rw_mbytes_per_sec": 0, 00:10:22.042 "r_mbytes_per_sec": 0, 00:10:22.042 "w_mbytes_per_sec": 0 00:10:22.042 }, 00:10:22.042 "claimed": false, 00:10:22.042 "zoned": false, 00:10:22.042 "supported_io_types": { 00:10:22.042 "read": true, 00:10:22.042 "write": true, 00:10:22.042 "unmap": true, 00:10:22.042 "flush": true, 00:10:22.042 "reset": true, 00:10:22.042 "nvme_admin": false, 00:10:22.042 "nvme_io": false, 00:10:22.042 "nvme_io_md": false, 00:10:22.042 "write_zeroes": true, 00:10:22.042 "zcopy": true, 00:10:22.042 "get_zone_info": false, 00:10:22.042 "zone_management": false, 00:10:22.042 "zone_append": false, 00:10:22.042 "compare": false, 00:10:22.042 "compare_and_write": false, 00:10:22.042 "abort": true, 00:10:22.042 "seek_hole": false, 00:10:22.042 "seek_data": false, 00:10:22.042 "copy": true, 00:10:22.042 "nvme_iov_md": false 00:10:22.042 }, 00:10:22.042 "memory_domains": [ 00:10:22.042 { 00:10:22.042 "dma_device_id": "system", 00:10:22.042 "dma_device_type": 1 00:10:22.042 }, 00:10:22.042 { 00:10:22.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.042 "dma_device_type": 2 00:10:22.042 } 00:10:22.042 ], 00:10:22.042 "driver_specific": {} 00:10:22.042 } 00:10:22.042 ] 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:22.042 11:48:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.042 BaseBdev3 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.042 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.043 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.043 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:22.043 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.043 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.043 [ 00:10:22.043 { 
00:10:22.043 "name": "BaseBdev3", 00:10:22.043 "aliases": [ 00:10:22.043 "7c97db69-c1d7-490e-aca2-99f2343d3de1" 00:10:22.043 ], 00:10:22.043 "product_name": "Malloc disk", 00:10:22.043 "block_size": 512, 00:10:22.043 "num_blocks": 65536, 00:10:22.043 "uuid": "7c97db69-c1d7-490e-aca2-99f2343d3de1", 00:10:22.043 "assigned_rate_limits": { 00:10:22.043 "rw_ios_per_sec": 0, 00:10:22.043 "rw_mbytes_per_sec": 0, 00:10:22.043 "r_mbytes_per_sec": 0, 00:10:22.043 "w_mbytes_per_sec": 0 00:10:22.043 }, 00:10:22.043 "claimed": false, 00:10:22.043 "zoned": false, 00:10:22.043 "supported_io_types": { 00:10:22.043 "read": true, 00:10:22.043 "write": true, 00:10:22.043 "unmap": true, 00:10:22.043 "flush": true, 00:10:22.043 "reset": true, 00:10:22.043 "nvme_admin": false, 00:10:22.043 "nvme_io": false, 00:10:22.043 "nvme_io_md": false, 00:10:22.043 "write_zeroes": true, 00:10:22.043 "zcopy": true, 00:10:22.043 "get_zone_info": false, 00:10:22.043 "zone_management": false, 00:10:22.043 "zone_append": false, 00:10:22.043 "compare": false, 00:10:22.043 "compare_and_write": false, 00:10:22.043 "abort": true, 00:10:22.043 "seek_hole": false, 00:10:22.043 "seek_data": false, 00:10:22.043 "copy": true, 00:10:22.043 "nvme_iov_md": false 00:10:22.043 }, 00:10:22.043 "memory_domains": [ 00:10:22.043 { 00:10:22.043 "dma_device_id": "system", 00:10:22.043 "dma_device_type": 1 00:10:22.043 }, 00:10:22.043 { 00:10:22.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.043 "dma_device_type": 2 00:10:22.043 } 00:10:22.043 ], 00:10:22.043 "driver_specific": {} 00:10:22.043 } 00:10:22.043 ] 00:10:22.043 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.043 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:22.043 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:22.043 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:22.043 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:22.043 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.043 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.043 BaseBdev4 00:10:22.043 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.043 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:22.043 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:22.043 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:22.043 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:22.043 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:22.043 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:22.043 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:22.043 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.043 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.043 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.043 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:22.043 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.043 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:22.303 [ 00:10:22.303 { 00:10:22.303 "name": "BaseBdev4", 00:10:22.303 "aliases": [ 00:10:22.303 "2385cadd-492f-4bd0-a2f4-e984ac6499fd" 00:10:22.303 ], 00:10:22.303 "product_name": "Malloc disk", 00:10:22.303 "block_size": 512, 00:10:22.303 "num_blocks": 65536, 00:10:22.303 "uuid": "2385cadd-492f-4bd0-a2f4-e984ac6499fd", 00:10:22.303 "assigned_rate_limits": { 00:10:22.303 "rw_ios_per_sec": 0, 00:10:22.303 "rw_mbytes_per_sec": 0, 00:10:22.303 "r_mbytes_per_sec": 0, 00:10:22.303 "w_mbytes_per_sec": 0 00:10:22.303 }, 00:10:22.303 "claimed": false, 00:10:22.303 "zoned": false, 00:10:22.303 "supported_io_types": { 00:10:22.303 "read": true, 00:10:22.303 "write": true, 00:10:22.303 "unmap": true, 00:10:22.303 "flush": true, 00:10:22.303 "reset": true, 00:10:22.303 "nvme_admin": false, 00:10:22.303 "nvme_io": false, 00:10:22.303 "nvme_io_md": false, 00:10:22.303 "write_zeroes": true, 00:10:22.303 "zcopy": true, 00:10:22.303 "get_zone_info": false, 00:10:22.303 "zone_management": false, 00:10:22.303 "zone_append": false, 00:10:22.303 "compare": false, 00:10:22.303 "compare_and_write": false, 00:10:22.303 "abort": true, 00:10:22.303 "seek_hole": false, 00:10:22.303 "seek_data": false, 00:10:22.303 "copy": true, 00:10:22.303 "nvme_iov_md": false 00:10:22.303 }, 00:10:22.303 "memory_domains": [ 00:10:22.303 { 00:10:22.303 "dma_device_id": "system", 00:10:22.303 "dma_device_type": 1 00:10:22.303 }, 00:10:22.303 { 00:10:22.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.303 "dma_device_type": 2 00:10:22.303 } 00:10:22.303 ], 00:10:22.303 "driver_specific": {} 00:10:22.303 } 00:10:22.303 ] 00:10:22.303 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.303 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:22.303 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:22.303 11:48:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:22.303 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:22.303 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.303 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.303 [2024-11-27 11:48:48.451814] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:22.303 [2024-11-27 11:48:48.451913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:22.303 [2024-11-27 11:48:48.451976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:22.303 [2024-11-27 11:48:48.453909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:22.303 [2024-11-27 11:48:48.453994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:22.303 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.303 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:22.303 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.303 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.303 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.303 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.303 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:22.303 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.303 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.303 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.303 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.303 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.303 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.303 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.303 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.303 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.303 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.303 "name": "Existed_Raid", 00:10:22.303 "uuid": "ec579f13-a5d9-4e65-a014-8d2dc8a7d8b5", 00:10:22.303 "strip_size_kb": 64, 00:10:22.303 "state": "configuring", 00:10:22.303 "raid_level": "raid0", 00:10:22.303 "superblock": true, 00:10:22.303 "num_base_bdevs": 4, 00:10:22.303 "num_base_bdevs_discovered": 3, 00:10:22.303 "num_base_bdevs_operational": 4, 00:10:22.303 "base_bdevs_list": [ 00:10:22.303 { 00:10:22.303 "name": "BaseBdev1", 00:10:22.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.303 "is_configured": false, 00:10:22.303 "data_offset": 0, 00:10:22.303 "data_size": 0 00:10:22.303 }, 00:10:22.303 { 00:10:22.303 "name": "BaseBdev2", 00:10:22.303 "uuid": "ab4c4c17-f380-4779-902e-8dd3d67f3442", 00:10:22.303 "is_configured": true, 00:10:22.303 "data_offset": 2048, 00:10:22.303 "data_size": 63488 
00:10:22.303 }, 00:10:22.303 { 00:10:22.303 "name": "BaseBdev3", 00:10:22.303 "uuid": "7c97db69-c1d7-490e-aca2-99f2343d3de1", 00:10:22.303 "is_configured": true, 00:10:22.303 "data_offset": 2048, 00:10:22.303 "data_size": 63488 00:10:22.303 }, 00:10:22.303 { 00:10:22.303 "name": "BaseBdev4", 00:10:22.303 "uuid": "2385cadd-492f-4bd0-a2f4-e984ac6499fd", 00:10:22.303 "is_configured": true, 00:10:22.303 "data_offset": 2048, 00:10:22.303 "data_size": 63488 00:10:22.303 } 00:10:22.303 ] 00:10:22.303 }' 00:10:22.303 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.303 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.563 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:22.563 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.563 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.563 [2024-11-27 11:48:48.935028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:22.563 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.563 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:22.563 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.563 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.563 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.563 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.563 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:22.563 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.563 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.563 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.563 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.822 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.822 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.822 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.822 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.822 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.822 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.822 "name": "Existed_Raid", 00:10:22.822 "uuid": "ec579f13-a5d9-4e65-a014-8d2dc8a7d8b5", 00:10:22.822 "strip_size_kb": 64, 00:10:22.822 "state": "configuring", 00:10:22.822 "raid_level": "raid0", 00:10:22.822 "superblock": true, 00:10:22.822 "num_base_bdevs": 4, 00:10:22.822 "num_base_bdevs_discovered": 2, 00:10:22.822 "num_base_bdevs_operational": 4, 00:10:22.822 "base_bdevs_list": [ 00:10:22.822 { 00:10:22.822 "name": "BaseBdev1", 00:10:22.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.822 "is_configured": false, 00:10:22.822 "data_offset": 0, 00:10:22.822 "data_size": 0 00:10:22.822 }, 00:10:22.822 { 00:10:22.822 "name": null, 00:10:22.822 "uuid": "ab4c4c17-f380-4779-902e-8dd3d67f3442", 00:10:22.822 "is_configured": false, 00:10:22.822 "data_offset": 0, 00:10:22.822 "data_size": 63488 
00:10:22.822 }, 00:10:22.822 { 00:10:22.822 "name": "BaseBdev3", 00:10:22.822 "uuid": "7c97db69-c1d7-490e-aca2-99f2343d3de1", 00:10:22.822 "is_configured": true, 00:10:22.822 "data_offset": 2048, 00:10:22.822 "data_size": 63488 00:10:22.822 }, 00:10:22.822 { 00:10:22.822 "name": "BaseBdev4", 00:10:22.822 "uuid": "2385cadd-492f-4bd0-a2f4-e984ac6499fd", 00:10:22.822 "is_configured": true, 00:10:22.822 "data_offset": 2048, 00:10:22.822 "data_size": 63488 00:10:22.822 } 00:10:22.822 ] 00:10:22.822 }' 00:10:22.822 11:48:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.822 11:48:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.079 11:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.079 11:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:23.079 11:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.079 11:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.079 11:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.079 11:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:23.079 11:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:23.079 11:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.079 11:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.336 [2024-11-27 11:48:49.490296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:23.336 BaseBdev1 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.337 [ 00:10:23.337 { 00:10:23.337 "name": "BaseBdev1", 00:10:23.337 "aliases": [ 00:10:23.337 "bb1856f6-2725-4b87-a643-a0e7c4f029c2" 00:10:23.337 ], 00:10:23.337 "product_name": "Malloc disk", 00:10:23.337 "block_size": 512, 00:10:23.337 "num_blocks": 65536, 00:10:23.337 "uuid": "bb1856f6-2725-4b87-a643-a0e7c4f029c2", 00:10:23.337 "assigned_rate_limits": { 00:10:23.337 "rw_ios_per_sec": 0, 00:10:23.337 "rw_mbytes_per_sec": 0, 
00:10:23.337 "r_mbytes_per_sec": 0, 00:10:23.337 "w_mbytes_per_sec": 0 00:10:23.337 }, 00:10:23.337 "claimed": true, 00:10:23.337 "claim_type": "exclusive_write", 00:10:23.337 "zoned": false, 00:10:23.337 "supported_io_types": { 00:10:23.337 "read": true, 00:10:23.337 "write": true, 00:10:23.337 "unmap": true, 00:10:23.337 "flush": true, 00:10:23.337 "reset": true, 00:10:23.337 "nvme_admin": false, 00:10:23.337 "nvme_io": false, 00:10:23.337 "nvme_io_md": false, 00:10:23.337 "write_zeroes": true, 00:10:23.337 "zcopy": true, 00:10:23.337 "get_zone_info": false, 00:10:23.337 "zone_management": false, 00:10:23.337 "zone_append": false, 00:10:23.337 "compare": false, 00:10:23.337 "compare_and_write": false, 00:10:23.337 "abort": true, 00:10:23.337 "seek_hole": false, 00:10:23.337 "seek_data": false, 00:10:23.337 "copy": true, 00:10:23.337 "nvme_iov_md": false 00:10:23.337 }, 00:10:23.337 "memory_domains": [ 00:10:23.337 { 00:10:23.337 "dma_device_id": "system", 00:10:23.337 "dma_device_type": 1 00:10:23.337 }, 00:10:23.337 { 00:10:23.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.337 "dma_device_type": 2 00:10:23.337 } 00:10:23.337 ], 00:10:23.337 "driver_specific": {} 00:10:23.337 } 00:10:23.337 ] 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:23.337 11:48:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.337 "name": "Existed_Raid", 00:10:23.337 "uuid": "ec579f13-a5d9-4e65-a014-8d2dc8a7d8b5", 00:10:23.337 "strip_size_kb": 64, 00:10:23.337 "state": "configuring", 00:10:23.337 "raid_level": "raid0", 00:10:23.337 "superblock": true, 00:10:23.337 "num_base_bdevs": 4, 00:10:23.337 "num_base_bdevs_discovered": 3, 00:10:23.337 "num_base_bdevs_operational": 4, 00:10:23.337 "base_bdevs_list": [ 00:10:23.337 { 00:10:23.337 "name": "BaseBdev1", 00:10:23.337 "uuid": "bb1856f6-2725-4b87-a643-a0e7c4f029c2", 00:10:23.337 "is_configured": true, 00:10:23.337 "data_offset": 2048, 00:10:23.337 "data_size": 63488 00:10:23.337 }, 00:10:23.337 { 
00:10:23.337 "name": null, 00:10:23.337 "uuid": "ab4c4c17-f380-4779-902e-8dd3d67f3442", 00:10:23.337 "is_configured": false, 00:10:23.337 "data_offset": 0, 00:10:23.337 "data_size": 63488 00:10:23.337 }, 00:10:23.337 { 00:10:23.337 "name": "BaseBdev3", 00:10:23.337 "uuid": "7c97db69-c1d7-490e-aca2-99f2343d3de1", 00:10:23.337 "is_configured": true, 00:10:23.337 "data_offset": 2048, 00:10:23.337 "data_size": 63488 00:10:23.337 }, 00:10:23.337 { 00:10:23.337 "name": "BaseBdev4", 00:10:23.337 "uuid": "2385cadd-492f-4bd0-a2f4-e984ac6499fd", 00:10:23.337 "is_configured": true, 00:10:23.337 "data_offset": 2048, 00:10:23.337 "data_size": 63488 00:10:23.337 } 00:10:23.337 ] 00:10:23.337 }' 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.337 11:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.906 11:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.906 11:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.906 11:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:23.906 11:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.906 11:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.906 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:23.906 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:23.906 11:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.906 11:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.906 [2024-11-27 11:48:50.037479] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:23.906 11:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.906 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:23.906 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.906 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.906 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:23.906 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.906 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.906 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.906 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.906 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.906 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.906 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.906 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.906 11:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.906 11:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.906 11:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.906 11:48:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.906 "name": "Existed_Raid", 00:10:23.906 "uuid": "ec579f13-a5d9-4e65-a014-8d2dc8a7d8b5", 00:10:23.906 "strip_size_kb": 64, 00:10:23.906 "state": "configuring", 00:10:23.906 "raid_level": "raid0", 00:10:23.906 "superblock": true, 00:10:23.906 "num_base_bdevs": 4, 00:10:23.906 "num_base_bdevs_discovered": 2, 00:10:23.906 "num_base_bdevs_operational": 4, 00:10:23.906 "base_bdevs_list": [ 00:10:23.906 { 00:10:23.906 "name": "BaseBdev1", 00:10:23.906 "uuid": "bb1856f6-2725-4b87-a643-a0e7c4f029c2", 00:10:23.906 "is_configured": true, 00:10:23.906 "data_offset": 2048, 00:10:23.906 "data_size": 63488 00:10:23.906 }, 00:10:23.906 { 00:10:23.906 "name": null, 00:10:23.906 "uuid": "ab4c4c17-f380-4779-902e-8dd3d67f3442", 00:10:23.906 "is_configured": false, 00:10:23.906 "data_offset": 0, 00:10:23.906 "data_size": 63488 00:10:23.906 }, 00:10:23.906 { 00:10:23.906 "name": null, 00:10:23.906 "uuid": "7c97db69-c1d7-490e-aca2-99f2343d3de1", 00:10:23.906 "is_configured": false, 00:10:23.906 "data_offset": 0, 00:10:23.906 "data_size": 63488 00:10:23.906 }, 00:10:23.906 { 00:10:23.906 "name": "BaseBdev4", 00:10:23.906 "uuid": "2385cadd-492f-4bd0-a2f4-e984ac6499fd", 00:10:23.906 "is_configured": true, 00:10:23.906 "data_offset": 2048, 00:10:23.906 "data_size": 63488 00:10:23.906 } 00:10:23.906 ] 00:10:23.906 }' 00:10:23.906 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.906 11:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.168 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.168 11:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.168 11:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.168 11:48:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:24.168 11:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.426 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:24.426 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:24.427 11:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.427 11:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.427 [2024-11-27 11:48:50.560565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:24.427 11:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.427 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:24.427 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.427 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.427 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.427 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.427 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.427 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.427 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.427 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:24.427 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.427 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.427 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.427 11:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.427 11:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.427 11:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.427 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.427 "name": "Existed_Raid", 00:10:24.427 "uuid": "ec579f13-a5d9-4e65-a014-8d2dc8a7d8b5", 00:10:24.427 "strip_size_kb": 64, 00:10:24.427 "state": "configuring", 00:10:24.427 "raid_level": "raid0", 00:10:24.427 "superblock": true, 00:10:24.427 "num_base_bdevs": 4, 00:10:24.427 "num_base_bdevs_discovered": 3, 00:10:24.427 "num_base_bdevs_operational": 4, 00:10:24.427 "base_bdevs_list": [ 00:10:24.427 { 00:10:24.427 "name": "BaseBdev1", 00:10:24.427 "uuid": "bb1856f6-2725-4b87-a643-a0e7c4f029c2", 00:10:24.427 "is_configured": true, 00:10:24.427 "data_offset": 2048, 00:10:24.427 "data_size": 63488 00:10:24.427 }, 00:10:24.427 { 00:10:24.427 "name": null, 00:10:24.427 "uuid": "ab4c4c17-f380-4779-902e-8dd3d67f3442", 00:10:24.427 "is_configured": false, 00:10:24.427 "data_offset": 0, 00:10:24.427 "data_size": 63488 00:10:24.427 }, 00:10:24.427 { 00:10:24.427 "name": "BaseBdev3", 00:10:24.427 "uuid": "7c97db69-c1d7-490e-aca2-99f2343d3de1", 00:10:24.427 "is_configured": true, 00:10:24.427 "data_offset": 2048, 00:10:24.427 "data_size": 63488 00:10:24.427 }, 00:10:24.427 { 00:10:24.427 "name": "BaseBdev4", 00:10:24.427 "uuid": 
"2385cadd-492f-4bd0-a2f4-e984ac6499fd", 00:10:24.427 "is_configured": true, 00:10:24.427 "data_offset": 2048, 00:10:24.427 "data_size": 63488 00:10:24.427 } 00:10:24.427 ] 00:10:24.427 }' 00:10:24.427 11:48:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.427 11:48:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.686 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.686 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:24.686 11:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.686 11:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.686 11:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.945 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:24.945 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:24.945 11:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.945 11:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.945 [2024-11-27 11:48:51.099722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:24.945 11:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.945 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:24.945 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.945 11:48:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.945 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.945 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.945 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.945 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.945 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.945 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.945 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.945 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.945 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.945 11:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.945 11:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.945 11:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.945 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.945 "name": "Existed_Raid", 00:10:24.945 "uuid": "ec579f13-a5d9-4e65-a014-8d2dc8a7d8b5", 00:10:24.945 "strip_size_kb": 64, 00:10:24.945 "state": "configuring", 00:10:24.945 "raid_level": "raid0", 00:10:24.945 "superblock": true, 00:10:24.945 "num_base_bdevs": 4, 00:10:24.945 "num_base_bdevs_discovered": 2, 00:10:24.945 "num_base_bdevs_operational": 4, 00:10:24.945 "base_bdevs_list": [ 00:10:24.945 { 00:10:24.945 "name": null, 00:10:24.945 
"uuid": "bb1856f6-2725-4b87-a643-a0e7c4f029c2", 00:10:24.945 "is_configured": false, 00:10:24.945 "data_offset": 0, 00:10:24.945 "data_size": 63488 00:10:24.945 }, 00:10:24.945 { 00:10:24.945 "name": null, 00:10:24.945 "uuid": "ab4c4c17-f380-4779-902e-8dd3d67f3442", 00:10:24.945 "is_configured": false, 00:10:24.945 "data_offset": 0, 00:10:24.945 "data_size": 63488 00:10:24.945 }, 00:10:24.945 { 00:10:24.945 "name": "BaseBdev3", 00:10:24.945 "uuid": "7c97db69-c1d7-490e-aca2-99f2343d3de1", 00:10:24.945 "is_configured": true, 00:10:24.945 "data_offset": 2048, 00:10:24.945 "data_size": 63488 00:10:24.945 }, 00:10:24.945 { 00:10:24.945 "name": "BaseBdev4", 00:10:24.945 "uuid": "2385cadd-492f-4bd0-a2f4-e984ac6499fd", 00:10:24.945 "is_configured": true, 00:10:24.945 "data_offset": 2048, 00:10:24.945 "data_size": 63488 00:10:24.945 } 00:10:24.945 ] 00:10:24.945 }' 00:10:24.945 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.945 11:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.513 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:25.513 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.513 11:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.513 11:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.513 11:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.513 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:25.514 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:25.514 11:48:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.514 11:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.514 [2024-11-27 11:48:51.695787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:25.514 11:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.514 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:25.514 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.514 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.514 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.514 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.514 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.514 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.514 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.514 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.514 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.514 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.514 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.514 11:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.514 11:48:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.514 11:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.514 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.514 "name": "Existed_Raid", 00:10:25.514 "uuid": "ec579f13-a5d9-4e65-a014-8d2dc8a7d8b5", 00:10:25.514 "strip_size_kb": 64, 00:10:25.514 "state": "configuring", 00:10:25.514 "raid_level": "raid0", 00:10:25.514 "superblock": true, 00:10:25.514 "num_base_bdevs": 4, 00:10:25.514 "num_base_bdevs_discovered": 3, 00:10:25.514 "num_base_bdevs_operational": 4, 00:10:25.514 "base_bdevs_list": [ 00:10:25.514 { 00:10:25.514 "name": null, 00:10:25.514 "uuid": "bb1856f6-2725-4b87-a643-a0e7c4f029c2", 00:10:25.514 "is_configured": false, 00:10:25.514 "data_offset": 0, 00:10:25.514 "data_size": 63488 00:10:25.514 }, 00:10:25.514 { 00:10:25.514 "name": "BaseBdev2", 00:10:25.514 "uuid": "ab4c4c17-f380-4779-902e-8dd3d67f3442", 00:10:25.514 "is_configured": true, 00:10:25.514 "data_offset": 2048, 00:10:25.514 "data_size": 63488 00:10:25.514 }, 00:10:25.514 { 00:10:25.514 "name": "BaseBdev3", 00:10:25.514 "uuid": "7c97db69-c1d7-490e-aca2-99f2343d3de1", 00:10:25.514 "is_configured": true, 00:10:25.514 "data_offset": 2048, 00:10:25.514 "data_size": 63488 00:10:25.514 }, 00:10:25.514 { 00:10:25.514 "name": "BaseBdev4", 00:10:25.514 "uuid": "2385cadd-492f-4bd0-a2f4-e984ac6499fd", 00:10:25.514 "is_configured": true, 00:10:25.514 "data_offset": 2048, 00:10:25.514 "data_size": 63488 00:10:25.514 } 00:10:25.514 ] 00:10:25.514 }' 00:10:25.514 11:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.514 11:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.773 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.773 11:48:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:25.773 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.773 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.773 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.033 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:26.033 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.033 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.033 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:26.033 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.033 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.033 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bb1856f6-2725-4b87-a643-a0e7c4f029c2 00:10:26.033 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.033 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.033 [2024-11-27 11:48:52.268559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:26.033 [2024-11-27 11:48:52.268798] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:26.033 [2024-11-27 11:48:52.268811] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:26.033 NewBaseBdev 00:10:26.033 [2024-11-27 11:48:52.269123] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:26.033 [2024-11-27 11:48:52.269279] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:26.033 [2024-11-27 11:48:52.269301] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:26.033 [2024-11-27 11:48:52.269452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.033 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.033 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:26.033 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:26.033 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.033 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:26.033 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.033 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.033 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.033 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.033 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.033 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.033 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:26.033 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.033 
11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.033 [ 00:10:26.033 { 00:10:26.033 "name": "NewBaseBdev", 00:10:26.033 "aliases": [ 00:10:26.033 "bb1856f6-2725-4b87-a643-a0e7c4f029c2" 00:10:26.033 ], 00:10:26.033 "product_name": "Malloc disk", 00:10:26.033 "block_size": 512, 00:10:26.033 "num_blocks": 65536, 00:10:26.033 "uuid": "bb1856f6-2725-4b87-a643-a0e7c4f029c2", 00:10:26.033 "assigned_rate_limits": { 00:10:26.033 "rw_ios_per_sec": 0, 00:10:26.033 "rw_mbytes_per_sec": 0, 00:10:26.033 "r_mbytes_per_sec": 0, 00:10:26.033 "w_mbytes_per_sec": 0 00:10:26.033 }, 00:10:26.033 "claimed": true, 00:10:26.033 "claim_type": "exclusive_write", 00:10:26.033 "zoned": false, 00:10:26.033 "supported_io_types": { 00:10:26.033 "read": true, 00:10:26.033 "write": true, 00:10:26.033 "unmap": true, 00:10:26.033 "flush": true, 00:10:26.033 "reset": true, 00:10:26.033 "nvme_admin": false, 00:10:26.033 "nvme_io": false, 00:10:26.033 "nvme_io_md": false, 00:10:26.033 "write_zeroes": true, 00:10:26.033 "zcopy": true, 00:10:26.033 "get_zone_info": false, 00:10:26.033 "zone_management": false, 00:10:26.033 "zone_append": false, 00:10:26.033 "compare": false, 00:10:26.033 "compare_and_write": false, 00:10:26.033 "abort": true, 00:10:26.033 "seek_hole": false, 00:10:26.033 "seek_data": false, 00:10:26.033 "copy": true, 00:10:26.033 "nvme_iov_md": false 00:10:26.033 }, 00:10:26.033 "memory_domains": [ 00:10:26.033 { 00:10:26.033 "dma_device_id": "system", 00:10:26.033 "dma_device_type": 1 00:10:26.033 }, 00:10:26.033 { 00:10:26.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.033 "dma_device_type": 2 00:10:26.033 } 00:10:26.033 ], 00:10:26.033 "driver_specific": {} 00:10:26.033 } 00:10:26.033 ] 00:10:26.034 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.034 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:26.034 11:48:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:26.034 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.034 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.034 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.034 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.034 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.034 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.034 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.034 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.034 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.034 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.034 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.034 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.034 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.034 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.034 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.034 "name": "Existed_Raid", 00:10:26.034 "uuid": "ec579f13-a5d9-4e65-a014-8d2dc8a7d8b5", 00:10:26.034 "strip_size_kb": 64, 00:10:26.034 
"state": "online", 00:10:26.034 "raid_level": "raid0", 00:10:26.034 "superblock": true, 00:10:26.034 "num_base_bdevs": 4, 00:10:26.034 "num_base_bdevs_discovered": 4, 00:10:26.034 "num_base_bdevs_operational": 4, 00:10:26.034 "base_bdevs_list": [ 00:10:26.034 { 00:10:26.034 "name": "NewBaseBdev", 00:10:26.034 "uuid": "bb1856f6-2725-4b87-a643-a0e7c4f029c2", 00:10:26.034 "is_configured": true, 00:10:26.034 "data_offset": 2048, 00:10:26.034 "data_size": 63488 00:10:26.034 }, 00:10:26.034 { 00:10:26.034 "name": "BaseBdev2", 00:10:26.034 "uuid": "ab4c4c17-f380-4779-902e-8dd3d67f3442", 00:10:26.034 "is_configured": true, 00:10:26.034 "data_offset": 2048, 00:10:26.034 "data_size": 63488 00:10:26.034 }, 00:10:26.034 { 00:10:26.034 "name": "BaseBdev3", 00:10:26.034 "uuid": "7c97db69-c1d7-490e-aca2-99f2343d3de1", 00:10:26.034 "is_configured": true, 00:10:26.034 "data_offset": 2048, 00:10:26.034 "data_size": 63488 00:10:26.034 }, 00:10:26.034 { 00:10:26.034 "name": "BaseBdev4", 00:10:26.034 "uuid": "2385cadd-492f-4bd0-a2f4-e984ac6499fd", 00:10:26.034 "is_configured": true, 00:10:26.034 "data_offset": 2048, 00:10:26.034 "data_size": 63488 00:10:26.034 } 00:10:26.034 ] 00:10:26.034 }' 00:10:26.034 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.034 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:26.603 
11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.603 [2024-11-27 11:48:52.692333] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:26.603 "name": "Existed_Raid", 00:10:26.603 "aliases": [ 00:10:26.603 "ec579f13-a5d9-4e65-a014-8d2dc8a7d8b5" 00:10:26.603 ], 00:10:26.603 "product_name": "Raid Volume", 00:10:26.603 "block_size": 512, 00:10:26.603 "num_blocks": 253952, 00:10:26.603 "uuid": "ec579f13-a5d9-4e65-a014-8d2dc8a7d8b5", 00:10:26.603 "assigned_rate_limits": { 00:10:26.603 "rw_ios_per_sec": 0, 00:10:26.603 "rw_mbytes_per_sec": 0, 00:10:26.603 "r_mbytes_per_sec": 0, 00:10:26.603 "w_mbytes_per_sec": 0 00:10:26.603 }, 00:10:26.603 "claimed": false, 00:10:26.603 "zoned": false, 00:10:26.603 "supported_io_types": { 00:10:26.603 "read": true, 00:10:26.603 "write": true, 00:10:26.603 "unmap": true, 00:10:26.603 "flush": true, 00:10:26.603 "reset": true, 00:10:26.603 "nvme_admin": false, 00:10:26.603 "nvme_io": false, 00:10:26.603 "nvme_io_md": false, 00:10:26.603 "write_zeroes": true, 00:10:26.603 "zcopy": false, 00:10:26.603 "get_zone_info": false, 00:10:26.603 "zone_management": false, 00:10:26.603 "zone_append": false, 00:10:26.603 "compare": false, 00:10:26.603 "compare_and_write": false, 00:10:26.603 "abort": 
false, 00:10:26.603 "seek_hole": false, 00:10:26.603 "seek_data": false, 00:10:26.603 "copy": false, 00:10:26.603 "nvme_iov_md": false 00:10:26.603 }, 00:10:26.603 "memory_domains": [ 00:10:26.603 { 00:10:26.603 "dma_device_id": "system", 00:10:26.603 "dma_device_type": 1 00:10:26.603 }, 00:10:26.603 { 00:10:26.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.603 "dma_device_type": 2 00:10:26.603 }, 00:10:26.603 { 00:10:26.603 "dma_device_id": "system", 00:10:26.603 "dma_device_type": 1 00:10:26.603 }, 00:10:26.603 { 00:10:26.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.603 "dma_device_type": 2 00:10:26.603 }, 00:10:26.603 { 00:10:26.603 "dma_device_id": "system", 00:10:26.603 "dma_device_type": 1 00:10:26.603 }, 00:10:26.603 { 00:10:26.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.603 "dma_device_type": 2 00:10:26.603 }, 00:10:26.603 { 00:10:26.603 "dma_device_id": "system", 00:10:26.603 "dma_device_type": 1 00:10:26.603 }, 00:10:26.603 { 00:10:26.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.603 "dma_device_type": 2 00:10:26.603 } 00:10:26.603 ], 00:10:26.603 "driver_specific": { 00:10:26.603 "raid": { 00:10:26.603 "uuid": "ec579f13-a5d9-4e65-a014-8d2dc8a7d8b5", 00:10:26.603 "strip_size_kb": 64, 00:10:26.603 "state": "online", 00:10:26.603 "raid_level": "raid0", 00:10:26.603 "superblock": true, 00:10:26.603 "num_base_bdevs": 4, 00:10:26.603 "num_base_bdevs_discovered": 4, 00:10:26.603 "num_base_bdevs_operational": 4, 00:10:26.603 "base_bdevs_list": [ 00:10:26.603 { 00:10:26.603 "name": "NewBaseBdev", 00:10:26.603 "uuid": "bb1856f6-2725-4b87-a643-a0e7c4f029c2", 00:10:26.603 "is_configured": true, 00:10:26.603 "data_offset": 2048, 00:10:26.603 "data_size": 63488 00:10:26.603 }, 00:10:26.603 { 00:10:26.603 "name": "BaseBdev2", 00:10:26.603 "uuid": "ab4c4c17-f380-4779-902e-8dd3d67f3442", 00:10:26.603 "is_configured": true, 00:10:26.603 "data_offset": 2048, 00:10:26.603 "data_size": 63488 00:10:26.603 }, 00:10:26.603 { 00:10:26.603 
"name": "BaseBdev3", 00:10:26.603 "uuid": "7c97db69-c1d7-490e-aca2-99f2343d3de1", 00:10:26.603 "is_configured": true, 00:10:26.603 "data_offset": 2048, 00:10:26.603 "data_size": 63488 00:10:26.603 }, 00:10:26.603 { 00:10:26.603 "name": "BaseBdev4", 00:10:26.603 "uuid": "2385cadd-492f-4bd0-a2f4-e984ac6499fd", 00:10:26.603 "is_configured": true, 00:10:26.603 "data_offset": 2048, 00:10:26.603 "data_size": 63488 00:10:26.603 } 00:10:26.603 ] 00:10:26.603 } 00:10:26.603 } 00:10:26.603 }' 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:26.603 BaseBdev2 00:10:26.603 BaseBdev3 00:10:26.603 BaseBdev4' 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.603 11:48:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:26.603 11:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.604 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.604 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.863 11:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.863 11:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.863 11:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.863 11:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:26.863 11:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.863 11:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.863 [2024-11-27 11:48:53.027361] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:26.863 [2024-11-27 11:48:53.027398] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:26.863 [2024-11-27 11:48:53.027515] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:26.863 [2024-11-27 11:48:53.027607] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:26.863 [2024-11-27 11:48:53.027620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:26.863 11:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.863 11:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70051 00:10:26.863 11:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70051 ']' 00:10:26.863 11:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70051 00:10:26.863 11:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:26.863 11:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:26.863 11:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70051 00:10:26.863 11:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:26.863 11:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:26.863 11:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70051' 00:10:26.863 killing process with pid 70051 00:10:26.863 11:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70051 00:10:26.863 [2024-11-27 11:48:53.065254] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:26.863 11:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70051 00:10:27.122 [2024-11-27 11:48:53.486613] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:28.501 11:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:28.501 00:10:28.501 real 0m11.812s 00:10:28.501 user 0m18.865s 00:10:28.501 sys 0m2.033s 00:10:28.501 11:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.501 11:48:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.501 ************************************ 00:10:28.501 END TEST raid_state_function_test_sb 00:10:28.501 ************************************ 00:10:28.501 11:48:54 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:28.501 11:48:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:28.501 11:48:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.501 11:48:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:28.501 ************************************ 00:10:28.501 START TEST raid_superblock_test 00:10:28.501 ************************************ 00:10:28.501 11:48:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:10:28.501 11:48:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:28.501 11:48:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:28.501 11:48:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:28.502 11:48:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:28.502 11:48:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:28.502 11:48:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:28.502 11:48:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:28.502 11:48:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:28.502 11:48:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:28.502 11:48:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:28.502 11:48:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:10:28.502 11:48:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:28.502 11:48:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:28.502 11:48:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:28.502 11:48:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:28.502 11:48:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:28.502 11:48:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70722 00:10:28.502 11:48:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70722 00:10:28.502 11:48:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:28.502 11:48:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70722 ']' 00:10:28.502 11:48:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.502 11:48:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:28.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.502 11:48:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.502 11:48:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:28.502 11:48:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.502 [2024-11-27 11:48:54.825401] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:10:28.502 [2024-11-27 11:48:54.825523] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70722 ] 00:10:28.762 [2024-11-27 11:48:54.982084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.762 [2024-11-27 11:48:55.096025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.022 [2024-11-27 11:48:55.308954] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.022 [2024-11-27 11:48:55.309021] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.590 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:29.590 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:29.590 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:29.590 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:29.590 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:29.590 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:29.590 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:29.590 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:29.590 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:29.590 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:29.590 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:29.590 
11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.590 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.590 malloc1 00:10:29.590 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.590 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:29.590 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.590 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.590 [2024-11-27 11:48:55.724168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:29.590 [2024-11-27 11:48:55.724256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.590 [2024-11-27 11:48:55.724283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:29.590 [2024-11-27 11:48:55.724293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.590 [2024-11-27 11:48:55.726595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.590 [2024-11-27 11:48:55.726635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:29.590 pt1 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.591 malloc2 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.591 [2024-11-27 11:48:55.780087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:29.591 [2024-11-27 11:48:55.780152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.591 [2024-11-27 11:48:55.780185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:29.591 [2024-11-27 11:48:55.780196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.591 [2024-11-27 11:48:55.782436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.591 [2024-11-27 11:48:55.782477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:29.591 
pt2 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.591 malloc3 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.591 [2024-11-27 11:48:55.848230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:29.591 [2024-11-27 11:48:55.848294] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.591 [2024-11-27 11:48:55.848322] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:29.591 [2024-11-27 11:48:55.848333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.591 [2024-11-27 11:48:55.850768] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.591 [2024-11-27 11:48:55.850810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:29.591 pt3 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.591 malloc4 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.591 [2024-11-27 11:48:55.905991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:29.591 [2024-11-27 11:48:55.906055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.591 [2024-11-27 11:48:55.906077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:29.591 [2024-11-27 11:48:55.906086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.591 [2024-11-27 11:48:55.908326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.591 [2024-11-27 11:48:55.908365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:29.591 pt4 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.591 [2024-11-27 11:48:55.918012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:29.591 [2024-11-27 
11:48:55.919909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:29.591 [2024-11-27 11:48:55.919998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:29.591 [2024-11-27 11:48:55.920047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:29.591 [2024-11-27 11:48:55.920221] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:29.591 [2024-11-27 11:48:55.920257] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:29.591 [2024-11-27 11:48:55.920556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:29.591 [2024-11-27 11:48:55.920743] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:29.591 [2024-11-27 11:48:55.920766] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:29.591 [2024-11-27 11:48:55.920940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.591 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.849 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.849 "name": "raid_bdev1", 00:10:29.849 "uuid": "9a8e3ab5-bf17-4dfa-86d9-3833211ad7a1", 00:10:29.849 "strip_size_kb": 64, 00:10:29.849 "state": "online", 00:10:29.849 "raid_level": "raid0", 00:10:29.849 "superblock": true, 00:10:29.849 "num_base_bdevs": 4, 00:10:29.849 "num_base_bdevs_discovered": 4, 00:10:29.849 "num_base_bdevs_operational": 4, 00:10:29.849 "base_bdevs_list": [ 00:10:29.849 { 00:10:29.849 "name": "pt1", 00:10:29.849 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:29.849 "is_configured": true, 00:10:29.849 "data_offset": 2048, 00:10:29.849 "data_size": 63488 00:10:29.849 }, 00:10:29.849 { 00:10:29.849 "name": "pt2", 00:10:29.849 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:29.849 "is_configured": true, 00:10:29.849 "data_offset": 2048, 00:10:29.849 "data_size": 63488 00:10:29.849 }, 00:10:29.849 { 00:10:29.849 "name": "pt3", 00:10:29.849 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:29.849 "is_configured": true, 00:10:29.849 "data_offset": 2048, 00:10:29.849 
"data_size": 63488 00:10:29.849 }, 00:10:29.849 { 00:10:29.849 "name": "pt4", 00:10:29.849 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:29.849 "is_configured": true, 00:10:29.849 "data_offset": 2048, 00:10:29.849 "data_size": 63488 00:10:29.849 } 00:10:29.849 ] 00:10:29.849 }' 00:10:29.849 11:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.849 11:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.106 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:30.106 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:30.106 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:30.106 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:30.106 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:30.106 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:30.106 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:30.106 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:30.106 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.106 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.106 [2024-11-27 11:48:56.405585] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:30.106 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.106 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:30.106 "name": "raid_bdev1", 00:10:30.106 "aliases": [ 00:10:30.106 "9a8e3ab5-bf17-4dfa-86d9-3833211ad7a1" 
00:10:30.106 ], 00:10:30.106 "product_name": "Raid Volume", 00:10:30.106 "block_size": 512, 00:10:30.106 "num_blocks": 253952, 00:10:30.106 "uuid": "9a8e3ab5-bf17-4dfa-86d9-3833211ad7a1", 00:10:30.106 "assigned_rate_limits": { 00:10:30.106 "rw_ios_per_sec": 0, 00:10:30.106 "rw_mbytes_per_sec": 0, 00:10:30.106 "r_mbytes_per_sec": 0, 00:10:30.106 "w_mbytes_per_sec": 0 00:10:30.106 }, 00:10:30.106 "claimed": false, 00:10:30.106 "zoned": false, 00:10:30.106 "supported_io_types": { 00:10:30.106 "read": true, 00:10:30.106 "write": true, 00:10:30.106 "unmap": true, 00:10:30.106 "flush": true, 00:10:30.106 "reset": true, 00:10:30.106 "nvme_admin": false, 00:10:30.106 "nvme_io": false, 00:10:30.106 "nvme_io_md": false, 00:10:30.106 "write_zeroes": true, 00:10:30.106 "zcopy": false, 00:10:30.106 "get_zone_info": false, 00:10:30.106 "zone_management": false, 00:10:30.106 "zone_append": false, 00:10:30.106 "compare": false, 00:10:30.106 "compare_and_write": false, 00:10:30.106 "abort": false, 00:10:30.106 "seek_hole": false, 00:10:30.106 "seek_data": false, 00:10:30.106 "copy": false, 00:10:30.106 "nvme_iov_md": false 00:10:30.106 }, 00:10:30.106 "memory_domains": [ 00:10:30.106 { 00:10:30.106 "dma_device_id": "system", 00:10:30.106 "dma_device_type": 1 00:10:30.106 }, 00:10:30.106 { 00:10:30.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.106 "dma_device_type": 2 00:10:30.106 }, 00:10:30.106 { 00:10:30.106 "dma_device_id": "system", 00:10:30.106 "dma_device_type": 1 00:10:30.106 }, 00:10:30.106 { 00:10:30.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.106 "dma_device_type": 2 00:10:30.106 }, 00:10:30.106 { 00:10:30.106 "dma_device_id": "system", 00:10:30.106 "dma_device_type": 1 00:10:30.106 }, 00:10:30.106 { 00:10:30.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.106 "dma_device_type": 2 00:10:30.106 }, 00:10:30.106 { 00:10:30.106 "dma_device_id": "system", 00:10:30.106 "dma_device_type": 1 00:10:30.106 }, 00:10:30.106 { 00:10:30.106 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:30.106 "dma_device_type": 2 00:10:30.106 } 00:10:30.106 ], 00:10:30.106 "driver_specific": { 00:10:30.106 "raid": { 00:10:30.106 "uuid": "9a8e3ab5-bf17-4dfa-86d9-3833211ad7a1", 00:10:30.106 "strip_size_kb": 64, 00:10:30.106 "state": "online", 00:10:30.106 "raid_level": "raid0", 00:10:30.106 "superblock": true, 00:10:30.106 "num_base_bdevs": 4, 00:10:30.106 "num_base_bdevs_discovered": 4, 00:10:30.106 "num_base_bdevs_operational": 4, 00:10:30.106 "base_bdevs_list": [ 00:10:30.106 { 00:10:30.106 "name": "pt1", 00:10:30.106 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:30.106 "is_configured": true, 00:10:30.106 "data_offset": 2048, 00:10:30.106 "data_size": 63488 00:10:30.106 }, 00:10:30.106 { 00:10:30.106 "name": "pt2", 00:10:30.106 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:30.106 "is_configured": true, 00:10:30.106 "data_offset": 2048, 00:10:30.106 "data_size": 63488 00:10:30.106 }, 00:10:30.106 { 00:10:30.106 "name": "pt3", 00:10:30.106 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:30.106 "is_configured": true, 00:10:30.106 "data_offset": 2048, 00:10:30.106 "data_size": 63488 00:10:30.106 }, 00:10:30.106 { 00:10:30.106 "name": "pt4", 00:10:30.106 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:30.106 "is_configured": true, 00:10:30.106 "data_offset": 2048, 00:10:30.106 "data_size": 63488 00:10:30.106 } 00:10:30.106 ] 00:10:30.106 } 00:10:30.106 } 00:10:30.106 }' 00:10:30.106 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:30.364 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:30.364 pt2 00:10:30.364 pt3 00:10:30.364 pt4' 00:10:30.364 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.364 11:48:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:30.364 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.364 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:30.364 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.364 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.364 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.364 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.364 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.364 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.364 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.364 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:30.364 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.364 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.364 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.364 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.364 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.365 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.365 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.365 11:48:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:30.365 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.365 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.365 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.365 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.365 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.365 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.365 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.365 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.365 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:30.365 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.365 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.365 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.365 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.365 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.365 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:30.365 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.365 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:10:30.365 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.365 [2024-11-27 11:48:56.729057] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:30.623 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.623 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9a8e3ab5-bf17-4dfa-86d9-3833211ad7a1 00:10:30.623 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9a8e3ab5-bf17-4dfa-86d9-3833211ad7a1 ']' 00:10:30.623 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:30.623 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.623 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.623 [2024-11-27 11:48:56.768596] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:30.623 [2024-11-27 11:48:56.768629] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:30.623 [2024-11-27 11:48:56.768738] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.623 [2024-11-27 11:48:56.768808] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:30.623 [2024-11-27 11:48:56.768822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.624 [2024-11-27 11:48:56.916383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:30.624 [2024-11-27 11:48:56.918444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:30.624 [2024-11-27 11:48:56.918502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:30.624 [2024-11-27 11:48:56.918541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:30.624 [2024-11-27 11:48:56.918596] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:30.624 [2024-11-27 11:48:56.918651] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:30.624 [2024-11-27 11:48:56.918672] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:30.624 [2024-11-27 11:48:56.918694] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:30.624 [2024-11-27 11:48:56.918708] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:30.624 [2024-11-27 11:48:56.918722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:10:30.624 request: 00:10:30.624 { 00:10:30.624 "name": "raid_bdev1", 00:10:30.624 "raid_level": "raid0", 00:10:30.624 "base_bdevs": [ 00:10:30.624 "malloc1", 00:10:30.624 "malloc2", 00:10:30.624 "malloc3", 00:10:30.624 "malloc4" 00:10:30.624 ], 00:10:30.624 "strip_size_kb": 64, 00:10:30.624 "superblock": false, 00:10:30.624 "method": "bdev_raid_create", 00:10:30.624 "req_id": 1 00:10:30.624 } 00:10:30.624 Got JSON-RPC error response 00:10:30.624 response: 00:10:30.624 { 00:10:30.624 "code": -17, 00:10:30.624 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:30.624 } 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.624 [2024-11-27 11:48:56.972241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:30.624 [2024-11-27 11:48:56.972312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.624 [2024-11-27 11:48:56.972336] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:30.624 [2024-11-27 11:48:56.972349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.624 [2024-11-27 11:48:56.974763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.624 [2024-11-27 11:48:56.974810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:30.624 [2024-11-27 11:48:56.974921] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:30.624 [2024-11-27 11:48:56.974984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:30.624 pt1 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.624 11:48:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.883 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.883 "name": "raid_bdev1", 00:10:30.883 "uuid": "9a8e3ab5-bf17-4dfa-86d9-3833211ad7a1", 00:10:30.883 "strip_size_kb": 64, 00:10:30.883 "state": "configuring", 00:10:30.883 "raid_level": "raid0", 00:10:30.883 "superblock": true, 00:10:30.883 "num_base_bdevs": 4, 00:10:30.883 "num_base_bdevs_discovered": 1, 00:10:30.883 "num_base_bdevs_operational": 4, 00:10:30.883 "base_bdevs_list": [ 00:10:30.883 { 00:10:30.883 "name": "pt1", 00:10:30.883 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:30.883 "is_configured": true, 00:10:30.883 "data_offset": 2048, 00:10:30.883 "data_size": 63488 00:10:30.883 }, 00:10:30.883 { 00:10:30.883 "name": null, 00:10:30.883 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:30.883 "is_configured": false, 00:10:30.883 "data_offset": 2048, 00:10:30.883 "data_size": 63488 00:10:30.883 }, 00:10:30.883 { 00:10:30.883 "name": null, 00:10:30.883 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:30.883 "is_configured": false, 00:10:30.883 "data_offset": 2048, 00:10:30.883 "data_size": 63488 00:10:30.883 }, 00:10:30.883 { 00:10:30.883 "name": null, 00:10:30.883 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:30.883 "is_configured": false, 00:10:30.883 "data_offset": 2048, 00:10:30.883 "data_size": 63488 00:10:30.883 } 00:10:30.883 ] 00:10:30.883 }' 00:10:30.883 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.883 11:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.142 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:31.142 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:31.142 11:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.142 11:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.142 [2024-11-27 11:48:57.435565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:31.142 [2024-11-27 11:48:57.435660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.142 [2024-11-27 11:48:57.435685] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:31.142 [2024-11-27 11:48:57.435698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.142 [2024-11-27 11:48:57.436237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.142 [2024-11-27 11:48:57.436270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:31.142 [2024-11-27 11:48:57.436372] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:31.142 [2024-11-27 11:48:57.436408] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:31.142 pt2 00:10:31.142 11:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.142 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:31.142 11:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.142 11:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.142 [2024-11-27 11:48:57.447539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:31.142 11:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.142 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:31.142 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:31.142 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.142 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.142 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.142 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.142 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.142 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.142 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.142 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.142 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.143 11:48:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.143 11:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.143 11:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.143 11:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.143 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.143 "name": "raid_bdev1", 00:10:31.143 "uuid": "9a8e3ab5-bf17-4dfa-86d9-3833211ad7a1", 00:10:31.143 "strip_size_kb": 64, 00:10:31.143 "state": "configuring", 00:10:31.143 "raid_level": "raid0", 00:10:31.143 "superblock": true, 00:10:31.143 "num_base_bdevs": 4, 00:10:31.143 "num_base_bdevs_discovered": 1, 00:10:31.143 "num_base_bdevs_operational": 4, 00:10:31.143 "base_bdevs_list": [ 00:10:31.143 { 00:10:31.143 "name": "pt1", 00:10:31.143 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:31.143 "is_configured": true, 00:10:31.143 "data_offset": 2048, 00:10:31.143 "data_size": 63488 00:10:31.143 }, 00:10:31.143 { 00:10:31.143 "name": null, 00:10:31.143 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:31.143 "is_configured": false, 00:10:31.143 "data_offset": 0, 00:10:31.143 "data_size": 63488 00:10:31.143 }, 00:10:31.143 { 00:10:31.143 "name": null, 00:10:31.143 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:31.143 "is_configured": false, 00:10:31.143 "data_offset": 2048, 00:10:31.143 "data_size": 63488 00:10:31.143 }, 00:10:31.143 { 00:10:31.143 "name": null, 00:10:31.143 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:31.143 "is_configured": false, 00:10:31.143 "data_offset": 2048, 00:10:31.143 "data_size": 63488 00:10:31.143 } 00:10:31.143 ] 00:10:31.143 }' 00:10:31.143 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.143 11:48:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:31.710 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:31.710 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:31.710 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:31.710 11:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.710 11:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.710 [2024-11-27 11:48:57.894778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:31.710 [2024-11-27 11:48:57.894873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.710 [2024-11-27 11:48:57.894900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:31.710 [2024-11-27 11:48:57.894911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.710 [2024-11-27 11:48:57.895403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.710 [2024-11-27 11:48:57.895439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:31.710 [2024-11-27 11:48:57.895530] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:31.711 [2024-11-27 11:48:57.895567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:31.711 pt2 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.711 [2024-11-27 11:48:57.906717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:31.711 [2024-11-27 11:48:57.906788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.711 [2024-11-27 11:48:57.906808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:31.711 [2024-11-27 11:48:57.906816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.711 [2024-11-27 11:48:57.907242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.711 [2024-11-27 11:48:57.907269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:31.711 [2024-11-27 11:48:57.907345] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:31.711 [2024-11-27 11:48:57.907374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:31.711 pt3 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.711 [2024-11-27 11:48:57.918664] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:31.711 [2024-11-27 11:48:57.918710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.711 [2024-11-27 11:48:57.918727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:31.711 [2024-11-27 11:48:57.918735] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.711 [2024-11-27 11:48:57.919131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.711 [2024-11-27 11:48:57.919154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:31.711 [2024-11-27 11:48:57.919243] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:31.711 [2024-11-27 11:48:57.919267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:31.711 [2024-11-27 11:48:57.919416] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:31.711 [2024-11-27 11:48:57.919441] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:31.711 [2024-11-27 11:48:57.919704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:31.711 [2024-11-27 11:48:57.919905] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:31.711 [2024-11-27 11:48:57.919927] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:31.711 [2024-11-27 11:48:57.920071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.711 pt4 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.711 "name": "raid_bdev1", 00:10:31.711 "uuid": "9a8e3ab5-bf17-4dfa-86d9-3833211ad7a1", 00:10:31.711 "strip_size_kb": 64, 00:10:31.711 "state": "online", 00:10:31.711 "raid_level": "raid0", 00:10:31.711 
"superblock": true, 00:10:31.711 "num_base_bdevs": 4, 00:10:31.711 "num_base_bdevs_discovered": 4, 00:10:31.711 "num_base_bdevs_operational": 4, 00:10:31.711 "base_bdevs_list": [ 00:10:31.711 { 00:10:31.711 "name": "pt1", 00:10:31.711 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:31.711 "is_configured": true, 00:10:31.711 "data_offset": 2048, 00:10:31.711 "data_size": 63488 00:10:31.711 }, 00:10:31.711 { 00:10:31.711 "name": "pt2", 00:10:31.711 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:31.711 "is_configured": true, 00:10:31.711 "data_offset": 2048, 00:10:31.711 "data_size": 63488 00:10:31.711 }, 00:10:31.711 { 00:10:31.711 "name": "pt3", 00:10:31.711 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:31.711 "is_configured": true, 00:10:31.711 "data_offset": 2048, 00:10:31.711 "data_size": 63488 00:10:31.711 }, 00:10:31.711 { 00:10:31.711 "name": "pt4", 00:10:31.711 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:31.711 "is_configured": true, 00:10:31.711 "data_offset": 2048, 00:10:31.711 "data_size": 63488 00:10:31.711 } 00:10:31.711 ] 00:10:31.711 }' 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.711 11:48:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:32.278 11:48:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:32.278 [2024-11-27 11:48:58.422245] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:32.278 "name": "raid_bdev1", 00:10:32.278 "aliases": [ 00:10:32.278 "9a8e3ab5-bf17-4dfa-86d9-3833211ad7a1" 00:10:32.278 ], 00:10:32.278 "product_name": "Raid Volume", 00:10:32.278 "block_size": 512, 00:10:32.278 "num_blocks": 253952, 00:10:32.278 "uuid": "9a8e3ab5-bf17-4dfa-86d9-3833211ad7a1", 00:10:32.278 "assigned_rate_limits": { 00:10:32.278 "rw_ios_per_sec": 0, 00:10:32.278 "rw_mbytes_per_sec": 0, 00:10:32.278 "r_mbytes_per_sec": 0, 00:10:32.278 "w_mbytes_per_sec": 0 00:10:32.278 }, 00:10:32.278 "claimed": false, 00:10:32.278 "zoned": false, 00:10:32.278 "supported_io_types": { 00:10:32.278 "read": true, 00:10:32.278 "write": true, 00:10:32.278 "unmap": true, 00:10:32.278 "flush": true, 00:10:32.278 "reset": true, 00:10:32.278 "nvme_admin": false, 00:10:32.278 "nvme_io": false, 00:10:32.278 "nvme_io_md": false, 00:10:32.278 "write_zeroes": true, 00:10:32.278 "zcopy": false, 00:10:32.278 "get_zone_info": false, 00:10:32.278 "zone_management": false, 00:10:32.278 "zone_append": false, 00:10:32.278 "compare": false, 00:10:32.278 "compare_and_write": false, 00:10:32.278 "abort": false, 00:10:32.278 "seek_hole": false, 00:10:32.278 "seek_data": false, 00:10:32.278 "copy": false, 00:10:32.278 "nvme_iov_md": false 00:10:32.278 }, 00:10:32.278 
"memory_domains": [ 00:10:32.278 { 00:10:32.278 "dma_device_id": "system", 00:10:32.278 "dma_device_type": 1 00:10:32.278 }, 00:10:32.278 { 00:10:32.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.278 "dma_device_type": 2 00:10:32.278 }, 00:10:32.278 { 00:10:32.278 "dma_device_id": "system", 00:10:32.278 "dma_device_type": 1 00:10:32.278 }, 00:10:32.278 { 00:10:32.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.278 "dma_device_type": 2 00:10:32.278 }, 00:10:32.278 { 00:10:32.278 "dma_device_id": "system", 00:10:32.278 "dma_device_type": 1 00:10:32.278 }, 00:10:32.278 { 00:10:32.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.278 "dma_device_type": 2 00:10:32.278 }, 00:10:32.278 { 00:10:32.278 "dma_device_id": "system", 00:10:32.278 "dma_device_type": 1 00:10:32.278 }, 00:10:32.278 { 00:10:32.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.278 "dma_device_type": 2 00:10:32.278 } 00:10:32.278 ], 00:10:32.278 "driver_specific": { 00:10:32.278 "raid": { 00:10:32.278 "uuid": "9a8e3ab5-bf17-4dfa-86d9-3833211ad7a1", 00:10:32.278 "strip_size_kb": 64, 00:10:32.278 "state": "online", 00:10:32.278 "raid_level": "raid0", 00:10:32.278 "superblock": true, 00:10:32.278 "num_base_bdevs": 4, 00:10:32.278 "num_base_bdevs_discovered": 4, 00:10:32.278 "num_base_bdevs_operational": 4, 00:10:32.278 "base_bdevs_list": [ 00:10:32.278 { 00:10:32.278 "name": "pt1", 00:10:32.278 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:32.278 "is_configured": true, 00:10:32.278 "data_offset": 2048, 00:10:32.278 "data_size": 63488 00:10:32.278 }, 00:10:32.278 { 00:10:32.278 "name": "pt2", 00:10:32.278 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:32.278 "is_configured": true, 00:10:32.278 "data_offset": 2048, 00:10:32.278 "data_size": 63488 00:10:32.278 }, 00:10:32.278 { 00:10:32.278 "name": "pt3", 00:10:32.278 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:32.278 "is_configured": true, 00:10:32.278 "data_offset": 2048, 00:10:32.278 "data_size": 63488 
00:10:32.278 }, 00:10:32.278 { 00:10:32.278 "name": "pt4", 00:10:32.278 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:32.278 "is_configured": true, 00:10:32.278 "data_offset": 2048, 00:10:32.278 "data_size": 63488 00:10:32.278 } 00:10:32.278 ] 00:10:32.278 } 00:10:32.278 } 00:10:32.278 }' 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:32.278 pt2 00:10:32.278 pt3 00:10:32.278 pt4' 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.278 11:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.279 11:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.279 11:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.538 
11:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.538 [2024-11-27 11:48:58.737654] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9a8e3ab5-bf17-4dfa-86d9-3833211ad7a1 '!=' 9a8e3ab5-bf17-4dfa-86d9-3833211ad7a1 ']' 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70722 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70722 ']' 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70722 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70722 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:32.538 killing process with pid 70722 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70722' 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70722 00:10:32.538 [2024-11-27 11:48:58.820898] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:32.538 [2024-11-27 11:48:58.821005] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.538 11:48:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70722 00:10:32.538 [2024-11-27 11:48:58.821085] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:32.538 [2024-11-27 11:48:58.821096] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:33.106 [2024-11-27 11:48:59.242086] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:34.043 11:49:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:34.043 00:10:34.043 real 0m5.687s 00:10:34.043 user 0m8.185s 00:10:34.043 sys 0m0.955s 00:10:34.043 11:49:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.043 11:49:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.043 ************************************ 00:10:34.043 END TEST raid_superblock_test 
00:10:34.043 ************************************ 00:10:34.303 11:49:00 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:34.303 11:49:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:34.303 11:49:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.303 11:49:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:34.303 ************************************ 00:10:34.303 START TEST raid_read_error_test 00:10:34.303 ************************************ 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WGASBq1tPE 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70986 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70986 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 70986 ']' 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.303 11:49:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.303 [2024-11-27 11:49:00.586959] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:10:34.303 [2024-11-27 11:49:00.587075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70986 ] 00:10:34.568 [2024-11-27 11:49:00.760008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.568 [2024-11-27 11:49:00.873920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.833 [2024-11-27 11:49:01.079156] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.833 [2024-11-27 11:49:01.079192] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.092 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:35.092 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:35.092 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:35.092 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:35.092 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.092 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.092 BaseBdev1_malloc 00:10:35.092 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.092 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:35.092 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.092 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.092 true 00:10:35.092 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:35.092 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:35.092 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.092 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.092 [2024-11-27 11:49:01.474688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:35.092 [2024-11-27 11:49:01.474748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.092 [2024-11-27 11:49:01.474769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:35.092 [2024-11-27 11:49:01.474781] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.352 [2024-11-27 11:49:01.477141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.352 [2024-11-27 11:49:01.477185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:35.352 BaseBdev1 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.352 BaseBdev2_malloc 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.352 true 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.352 [2024-11-27 11:49:01.542867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:35.352 [2024-11-27 11:49:01.542921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.352 [2024-11-27 11:49:01.542937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:35.352 [2024-11-27 11:49:01.542948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.352 [2024-11-27 11:49:01.545180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.352 [2024-11-27 11:49:01.545220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:35.352 BaseBdev2 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.352 BaseBdev3_malloc 00:10:35.352 11:49:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.352 true 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.352 [2024-11-27 11:49:01.622718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:35.352 [2024-11-27 11:49:01.622774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.352 [2024-11-27 11:49:01.622791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:35.352 [2024-11-27 11:49:01.622801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.352 [2024-11-27 11:49:01.625005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.352 [2024-11-27 11:49:01.625043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:35.352 BaseBdev3 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.352 BaseBdev4_malloc 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.352 true 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.352 [2024-11-27 11:49:01.691379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:35.352 [2024-11-27 11:49:01.691454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.352 [2024-11-27 11:49:01.691478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:35.352 [2024-11-27 11:49:01.691490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.352 [2024-11-27 11:49:01.694003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.352 [2024-11-27 11:49:01.694050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:35.352 BaseBdev4 00:10:35.352 11:49:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.353 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:35.353 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.353 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.353 [2024-11-27 11:49:01.703412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:35.353 [2024-11-27 11:49:01.705402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:35.353 [2024-11-27 11:49:01.705482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:35.353 [2024-11-27 11:49:01.705544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:35.353 [2024-11-27 11:49:01.705762] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:35.353 [2024-11-27 11:49:01.705787] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:35.353 [2024-11-27 11:49:01.706098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:35.353 [2024-11-27 11:49:01.706292] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:35.353 [2024-11-27 11:49:01.706312] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:35.353 [2024-11-27 11:49:01.706505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.353 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.353 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:35.353 11:49:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:35.353 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.353 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.353 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.353 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.353 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.353 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.353 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.353 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.353 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.353 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.353 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.353 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.611 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.611 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.611 "name": "raid_bdev1", 00:10:35.611 "uuid": "a79dea90-4550-494a-a9ba-c4995ec9bfd9", 00:10:35.611 "strip_size_kb": 64, 00:10:35.611 "state": "online", 00:10:35.611 "raid_level": "raid0", 00:10:35.611 "superblock": true, 00:10:35.611 "num_base_bdevs": 4, 00:10:35.611 "num_base_bdevs_discovered": 4, 00:10:35.611 "num_base_bdevs_operational": 4, 00:10:35.611 "base_bdevs_list": [ 00:10:35.611 
{ 00:10:35.611 "name": "BaseBdev1", 00:10:35.611 "uuid": "bd3e5977-1551-5974-9d84-05adf3e91078", 00:10:35.611 "is_configured": true, 00:10:35.611 "data_offset": 2048, 00:10:35.611 "data_size": 63488 00:10:35.611 }, 00:10:35.611 { 00:10:35.611 "name": "BaseBdev2", 00:10:35.611 "uuid": "7ccb458b-9695-5abb-b923-6f49887ed5d6", 00:10:35.611 "is_configured": true, 00:10:35.611 "data_offset": 2048, 00:10:35.611 "data_size": 63488 00:10:35.611 }, 00:10:35.611 { 00:10:35.611 "name": "BaseBdev3", 00:10:35.611 "uuid": "b4a933ec-b274-5461-88b6-c603a5d3dc4e", 00:10:35.611 "is_configured": true, 00:10:35.611 "data_offset": 2048, 00:10:35.611 "data_size": 63488 00:10:35.611 }, 00:10:35.611 { 00:10:35.611 "name": "BaseBdev4", 00:10:35.611 "uuid": "213f1640-f0c1-530a-a2e5-c0f4a1082982", 00:10:35.611 "is_configured": true, 00:10:35.611 "data_offset": 2048, 00:10:35.611 "data_size": 63488 00:10:35.611 } 00:10:35.611 ] 00:10:35.611 }' 00:10:35.611 11:49:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.611 11:49:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.870 11:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:35.870 11:49:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:36.129 [2024-11-27 11:49:02.267589] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:37.063 11:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:37.063 11:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.063 11:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.063 11:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.063 11:49:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:37.063 11:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:37.063 11:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:37.063 11:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:37.063 11:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.063 11:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.063 11:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.063 11:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.063 11:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.063 11:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.063 11:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.063 11:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.063 11:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.063 11:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.063 11:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.063 11:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.063 11:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.063 11:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.063 11:49:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.063 "name": "raid_bdev1", 00:10:37.063 "uuid": "a79dea90-4550-494a-a9ba-c4995ec9bfd9", 00:10:37.063 "strip_size_kb": 64, 00:10:37.063 "state": "online", 00:10:37.063 "raid_level": "raid0", 00:10:37.063 "superblock": true, 00:10:37.063 "num_base_bdevs": 4, 00:10:37.063 "num_base_bdevs_discovered": 4, 00:10:37.063 "num_base_bdevs_operational": 4, 00:10:37.063 "base_bdevs_list": [ 00:10:37.063 { 00:10:37.063 "name": "BaseBdev1", 00:10:37.063 "uuid": "bd3e5977-1551-5974-9d84-05adf3e91078", 00:10:37.063 "is_configured": true, 00:10:37.063 "data_offset": 2048, 00:10:37.063 "data_size": 63488 00:10:37.063 }, 00:10:37.063 { 00:10:37.063 "name": "BaseBdev2", 00:10:37.063 "uuid": "7ccb458b-9695-5abb-b923-6f49887ed5d6", 00:10:37.063 "is_configured": true, 00:10:37.063 "data_offset": 2048, 00:10:37.063 "data_size": 63488 00:10:37.063 }, 00:10:37.063 { 00:10:37.063 "name": "BaseBdev3", 00:10:37.063 "uuid": "b4a933ec-b274-5461-88b6-c603a5d3dc4e", 00:10:37.063 "is_configured": true, 00:10:37.063 "data_offset": 2048, 00:10:37.063 "data_size": 63488 00:10:37.063 }, 00:10:37.063 { 00:10:37.063 "name": "BaseBdev4", 00:10:37.063 "uuid": "213f1640-f0c1-530a-a2e5-c0f4a1082982", 00:10:37.063 "is_configured": true, 00:10:37.063 "data_offset": 2048, 00:10:37.063 "data_size": 63488 00:10:37.063 } 00:10:37.063 ] 00:10:37.063 }' 00:10:37.063 11:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.063 11:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.322 11:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:37.322 11:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.322 11:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.322 [2024-11-27 11:49:03.652303] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:37.322 [2024-11-27 11:49:03.652342] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:37.322 [2024-11-27 11:49:03.655032] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:37.322 [2024-11-27 11:49:03.655094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.322 [2024-11-27 11:49:03.655137] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:37.322 [2024-11-27 11:49:03.655149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:37.322 { 00:10:37.322 "results": [ 00:10:37.322 { 00:10:37.322 "job": "raid_bdev1", 00:10:37.322 "core_mask": "0x1", 00:10:37.322 "workload": "randrw", 00:10:37.322 "percentage": 50, 00:10:37.322 "status": "finished", 00:10:37.322 "queue_depth": 1, 00:10:37.322 "io_size": 131072, 00:10:37.322 "runtime": 1.385481, 00:10:37.322 "iops": 14343.03321373588, 00:10:37.322 "mibps": 1792.879151716985, 00:10:37.322 "io_failed": 1, 00:10:37.322 "io_timeout": 0, 00:10:37.322 "avg_latency_us": 96.56681745678948, 00:10:37.322 "min_latency_us": 28.28296943231441, 00:10:37.322 "max_latency_us": 1631.2454148471616 00:10:37.322 } 00:10:37.322 ], 00:10:37.322 "core_count": 1 00:10:37.322 } 00:10:37.322 11:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.322 11:49:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70986 00:10:37.322 11:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70986 ']' 00:10:37.322 11:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70986 00:10:37.322 11:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:37.322 11:49:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.322 11:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70986 00:10:37.322 11:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:37.322 11:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:37.322 killing process with pid 70986 00:10:37.322 11:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70986' 00:10:37.322 11:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70986 00:10:37.322 [2024-11-27 11:49:03.699084] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:37.322 11:49:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70986 00:10:37.890 [2024-11-27 11:49:04.034749] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:39.270 11:49:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:39.270 11:49:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:39.270 11:49:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WGASBq1tPE 00:10:39.270 11:49:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:39.270 11:49:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:39.270 11:49:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:39.270 11:49:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:39.270 11:49:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:39.270 00:10:39.270 real 0m4.786s 00:10:39.270 user 0m5.681s 00:10:39.270 sys 0m0.570s 00:10:39.270 11:49:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:39.270 11:49:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.270 ************************************ 00:10:39.270 END TEST raid_read_error_test 00:10:39.270 ************************************ 00:10:39.270 11:49:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:39.270 11:49:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:39.270 11:49:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.270 11:49:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:39.270 ************************************ 00:10:39.270 START TEST raid_write_error_test 00:10:39.270 ************************************ 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NQf3QhxNNp 00:10:39.270 11:49:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71132 00:10:39.270 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:39.271 11:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71132 00:10:39.271 11:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71132 ']' 00:10:39.271 11:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.271 11:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:39.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.271 11:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.271 11:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.271 11:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.271 [2024-11-27 11:49:05.439316] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:10:39.271 [2024-11-27 11:49:05.439455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71132 ] 00:10:39.271 [2024-11-27 11:49:05.596921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.529 [2024-11-27 11:49:05.717195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.786 [2024-11-27 11:49:05.920421] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.786 [2024-11-27 11:49:05.920489] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.044 BaseBdev1_malloc 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.044 true 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.044 [2024-11-27 11:49:06.347258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:40.044 [2024-11-27 11:49:06.347316] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.044 [2024-11-27 11:49:06.347335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:40.044 [2024-11-27 11:49:06.347347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.044 [2024-11-27 11:49:06.349453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.044 [2024-11-27 11:49:06.349494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:40.044 BaseBdev1 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.044 BaseBdev2_malloc 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:40.044 11:49:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.044 true 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.044 [2024-11-27 11:49:06.411481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:40.044 [2024-11-27 11:49:06.411541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.044 [2024-11-27 11:49:06.411559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:40.044 [2024-11-27 11:49:06.411569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.044 [2024-11-27 11:49:06.413908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.044 [2024-11-27 11:49:06.413948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:40.044 BaseBdev2 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.044 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:40.303 BaseBdev3_malloc 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.303 true 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.303 [2024-11-27 11:49:06.491716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:40.303 [2024-11-27 11:49:06.491795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.303 [2024-11-27 11:49:06.491823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:40.303 [2024-11-27 11:49:06.491847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.303 [2024-11-27 11:49:06.494291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.303 [2024-11-27 11:49:06.494338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:40.303 BaseBdev3 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.303 BaseBdev4_malloc 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.303 true 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.303 [2024-11-27 11:49:06.560808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:40.303 [2024-11-27 11:49:06.560899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.303 [2024-11-27 11:49:06.560930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:40.303 [2024-11-27 11:49:06.560940] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.303 [2024-11-27 11:49:06.563008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.303 [2024-11-27 11:49:06.563048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:40.303 BaseBdev4 
00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.303 [2024-11-27 11:49:06.572875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:40.303 [2024-11-27 11:49:06.574748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:40.303 [2024-11-27 11:49:06.574826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:40.303 [2024-11-27 11:49:06.574898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:40.303 [2024-11-27 11:49:06.575115] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:40.303 [2024-11-27 11:49:06.575139] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:40.303 [2024-11-27 11:49:06.575429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:40.303 [2024-11-27 11:49:06.575620] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:40.303 [2024-11-27 11:49:06.575639] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:40.303 [2024-11-27 11:49:06.575803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.303 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.303 "name": "raid_bdev1", 00:10:40.303 "uuid": "2ffb9922-c319-4e78-bbc6-9ca92872b50f", 00:10:40.303 "strip_size_kb": 64, 00:10:40.303 "state": "online", 00:10:40.303 "raid_level": "raid0", 00:10:40.303 "superblock": true, 00:10:40.303 "num_base_bdevs": 4, 00:10:40.303 "num_base_bdevs_discovered": 4, 00:10:40.303 
"num_base_bdevs_operational": 4, 00:10:40.303 "base_bdevs_list": [ 00:10:40.303 { 00:10:40.303 "name": "BaseBdev1", 00:10:40.303 "uuid": "a0085add-28e5-5964-810b-eacd54ab293f", 00:10:40.303 "is_configured": true, 00:10:40.303 "data_offset": 2048, 00:10:40.303 "data_size": 63488 00:10:40.303 }, 00:10:40.303 { 00:10:40.303 "name": "BaseBdev2", 00:10:40.303 "uuid": "f8ad8942-1ac9-5bf7-b8ef-6c1bb02317d4", 00:10:40.303 "is_configured": true, 00:10:40.303 "data_offset": 2048, 00:10:40.304 "data_size": 63488 00:10:40.304 }, 00:10:40.304 { 00:10:40.304 "name": "BaseBdev3", 00:10:40.304 "uuid": "db53b879-c6e0-52bf-99a4-8619dab47b14", 00:10:40.304 "is_configured": true, 00:10:40.304 "data_offset": 2048, 00:10:40.304 "data_size": 63488 00:10:40.304 }, 00:10:40.304 { 00:10:40.304 "name": "BaseBdev4", 00:10:40.304 "uuid": "51348df2-86f2-5c57-8ec8-9b937304626d", 00:10:40.304 "is_configured": true, 00:10:40.304 "data_offset": 2048, 00:10:40.304 "data_size": 63488 00:10:40.304 } 00:10:40.304 ] 00:10:40.304 }' 00:10:40.304 11:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.304 11:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.870 11:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:40.870 11:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:40.870 [2024-11-27 11:49:07.133488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:41.804 11:49:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:41.804 11:49:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.804 11:49:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.804 11:49:08 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.804 11:49:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:41.804 11:49:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:41.804 11:49:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:41.804 11:49:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:41.804 11:49:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:41.804 11:49:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:41.804 11:49:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.804 11:49:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.804 11:49:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.804 11:49:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.804 11:49:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.804 11:49:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.804 11:49:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.804 11:49:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.804 11:49:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:41.804 11:49:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.804 11:49:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.804 11:49:08 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.804 11:49:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.804 "name": "raid_bdev1", 00:10:41.804 "uuid": "2ffb9922-c319-4e78-bbc6-9ca92872b50f", 00:10:41.804 "strip_size_kb": 64, 00:10:41.804 "state": "online", 00:10:41.804 "raid_level": "raid0", 00:10:41.804 "superblock": true, 00:10:41.804 "num_base_bdevs": 4, 00:10:41.804 "num_base_bdevs_discovered": 4, 00:10:41.804 "num_base_bdevs_operational": 4, 00:10:41.804 "base_bdevs_list": [ 00:10:41.804 { 00:10:41.804 "name": "BaseBdev1", 00:10:41.804 "uuid": "a0085add-28e5-5964-810b-eacd54ab293f", 00:10:41.804 "is_configured": true, 00:10:41.804 "data_offset": 2048, 00:10:41.805 "data_size": 63488 00:10:41.805 }, 00:10:41.805 { 00:10:41.805 "name": "BaseBdev2", 00:10:41.805 "uuid": "f8ad8942-1ac9-5bf7-b8ef-6c1bb02317d4", 00:10:41.805 "is_configured": true, 00:10:41.805 "data_offset": 2048, 00:10:41.805 "data_size": 63488 00:10:41.805 }, 00:10:41.805 { 00:10:41.805 "name": "BaseBdev3", 00:10:41.805 "uuid": "db53b879-c6e0-52bf-99a4-8619dab47b14", 00:10:41.805 "is_configured": true, 00:10:41.805 "data_offset": 2048, 00:10:41.805 "data_size": 63488 00:10:41.805 }, 00:10:41.805 { 00:10:41.805 "name": "BaseBdev4", 00:10:41.805 "uuid": "51348df2-86f2-5c57-8ec8-9b937304626d", 00:10:41.805 "is_configured": true, 00:10:41.805 "data_offset": 2048, 00:10:41.805 "data_size": 63488 00:10:41.805 } 00:10:41.805 ] 00:10:41.805 }' 00:10:41.805 11:49:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.805 11:49:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.371 11:49:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:42.371 11:49:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.371 11:49:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:42.371 [2024-11-27 11:49:08.518359] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:42.371 [2024-11-27 11:49:08.518397] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:42.371 [2024-11-27 11:49:08.521126] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:42.371 [2024-11-27 11:49:08.521186] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.371 [2024-11-27 11:49:08.521229] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:42.371 [2024-11-27 11:49:08.521240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:42.371 { 00:10:42.371 "results": [ 00:10:42.371 { 00:10:42.371 "job": "raid_bdev1", 00:10:42.371 "core_mask": "0x1", 00:10:42.371 "workload": "randrw", 00:10:42.371 "percentage": 50, 00:10:42.371 "status": "finished", 00:10:42.371 "queue_depth": 1, 00:10:42.371 "io_size": 131072, 00:10:42.371 "runtime": 1.38563, 00:10:42.371 "iops": 14375.410463110642, 00:10:42.371 "mibps": 1796.9263078888303, 00:10:42.371 "io_failed": 1, 00:10:42.371 "io_timeout": 0, 00:10:42.371 "avg_latency_us": 96.27929639957209, 00:10:42.371 "min_latency_us": 26.829694323144103, 00:10:42.371 "max_latency_us": 1638.4 00:10:42.371 } 00:10:42.371 ], 00:10:42.371 "core_count": 1 00:10:42.371 } 00:10:42.371 11:49:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.371 11:49:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71132 00:10:42.371 11:49:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71132 ']' 00:10:42.371 11:49:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71132 00:10:42.371 11:49:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:10:42.371 11:49:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:42.371 11:49:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71132 00:10:42.372 11:49:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:42.372 11:49:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:42.372 killing process with pid 71132 00:10:42.372 11:49:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71132' 00:10:42.372 11:49:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71132 00:10:42.372 [2024-11-27 11:49:08.569021] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:42.372 11:49:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71132 00:10:42.630 [2024-11-27 11:49:08.898199] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:44.006 11:49:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NQf3QhxNNp 00:10:44.006 11:49:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:44.006 11:49:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:44.006 11:49:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:44.006 11:49:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:44.006 11:49:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:44.006 11:49:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:44.006 11:49:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:44.006 00:10:44.006 real 0m4.806s 00:10:44.006 user 0m5.685s 00:10:44.006 sys 0m0.597s 00:10:44.006 11:49:10 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.006 11:49:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.006 ************************************ 00:10:44.006 END TEST raid_write_error_test 00:10:44.006 ************************************ 00:10:44.006 11:49:10 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:44.006 11:49:10 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:44.006 11:49:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:44.006 11:49:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.006 11:49:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:44.006 ************************************ 00:10:44.006 START TEST raid_state_function_test 00:10:44.006 ************************************ 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71281 00:10:44.006 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:44.007 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71281' 00:10:44.007 Process raid pid: 71281 00:10:44.007 11:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71281 00:10:44.007 11:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71281 ']' 00:10:44.007 11:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.007 11:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.007 11:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.007 11:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.007 11:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.007 [2024-11-27 11:49:10.307441] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
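The `waitforlisten 71281` call above blocks until the freshly started `bdev_svc` process exposes its RPC socket at `/var/tmp/spdk.sock`. A minimal standalone sketch of that polling pattern (the function name and retry interval are illustrative; the real helper in autotest_common.sh is more involved and only its core wait loop is shown):

```shell
#!/usr/bin/env bash
# Poll until a UNIX domain socket appears, giving up after max_retries
# attempts. Returns 0 once the socket exists, 1 on timeout.
wait_for_socket() {
    local sock=$1
    local max_retries=${2:-100}
    local i=0
    while [ ! -S "$sock" ]; do
        i=$((i + 1))
        if [ "$i" -ge "$max_retries" ]; then
            return 1
        fi
        sleep 0.1
    done
    return 0
}
```

With a short retry budget this fails fast when the target never comes up, which is the behavior the `local max_retries=100` default in the log is guarding against.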
00:10:44.007 [2024-11-27 11:49:10.307984] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.266 [2024-11-27 11:49:10.488163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.266 [2024-11-27 11:49:10.615570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.524 [2024-11-27 11:49:10.826387] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.524 [2024-11-27 11:49:10.826440] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:45.091 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.091 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:45.091 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:45.091 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.091 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.091 [2024-11-27 11:49:11.184310] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:45.091 [2024-11-27 11:49:11.184367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:45.091 [2024-11-27 11:49:11.184379] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:45.091 [2024-11-27 11:49:11.184390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:45.091 [2024-11-27 11:49:11.184398] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:45.091 [2024-11-27 11:49:11.184408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:45.091 [2024-11-27 11:49:11.184415] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:45.091 [2024-11-27 11:49:11.184424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:45.091 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.091 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:45.091 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.091 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.091 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.091 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.091 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.091 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.091 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.091 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.091 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.091 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.091 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.091 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:10:45.091 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.091 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.091 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.091 "name": "Existed_Raid", 00:10:45.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.091 "strip_size_kb": 64, 00:10:45.091 "state": "configuring", 00:10:45.091 "raid_level": "concat", 00:10:45.091 "superblock": false, 00:10:45.091 "num_base_bdevs": 4, 00:10:45.091 "num_base_bdevs_discovered": 0, 00:10:45.091 "num_base_bdevs_operational": 4, 00:10:45.092 "base_bdevs_list": [ 00:10:45.092 { 00:10:45.092 "name": "BaseBdev1", 00:10:45.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.092 "is_configured": false, 00:10:45.092 "data_offset": 0, 00:10:45.092 "data_size": 0 00:10:45.092 }, 00:10:45.092 { 00:10:45.092 "name": "BaseBdev2", 00:10:45.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.092 "is_configured": false, 00:10:45.092 "data_offset": 0, 00:10:45.092 "data_size": 0 00:10:45.092 }, 00:10:45.092 { 00:10:45.092 "name": "BaseBdev3", 00:10:45.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.092 "is_configured": false, 00:10:45.092 "data_offset": 0, 00:10:45.092 "data_size": 0 00:10:45.092 }, 00:10:45.092 { 00:10:45.092 "name": "BaseBdev4", 00:10:45.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.092 "is_configured": false, 00:10:45.092 "data_offset": 0, 00:10:45.092 "data_size": 0 00:10:45.092 } 00:10:45.092 ] 00:10:45.092 }' 00:10:45.092 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.092 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.350 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
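The `verify_raid_bdev_state` helper above fetches `bdev_raid_get_bdevs all` and narrows the array with `jq -r '.[] | select(.name == "Existed_Raid")'` before comparing fields against the expected values. Run against a canned response (abbreviated from the state dump in the log; requires jq), the extraction step looks like this:

```shell
#!/usr/bin/env bash
# Abbreviated bdev_raid_get_bdevs output, as seen in the log above.
info='[{"name": "Existed_Raid", "state": "configuring",
       "raid_level": "concat", "strip_size_kb": 64,
       "num_base_bdevs": 4, "num_base_bdevs_discovered": 0}]'

# Select the entry for the raid bdev under test, as the helper does.
raid_bdev_info=$(echo "$info" | jq -r '.[] | select(.name == "Existed_Raid")')

# Pull individual fields out for comparison against expected values.
state=$(echo "$raid_bdev_info" | jq -r '.state')
discovered=$(echo "$raid_bdev_info" | jq -r '.num_base_bdevs_discovered')
```

The `configuring` state with zero discovered base bdevs matches the log's first state dump: the raid bdev exists but none of its Malloc base bdevs have been created yet.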
00:10:45.350 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.350 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.350 [2024-11-27 11:49:11.611556] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:45.350 [2024-11-27 11:49:11.611608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:45.350 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.350 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:45.350 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.350 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.350 [2024-11-27 11:49:11.623527] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:45.350 [2024-11-27 11:49:11.623575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:45.350 [2024-11-27 11:49:11.623586] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:45.350 [2024-11-27 11:49:11.623597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:45.350 [2024-11-27 11:49:11.623604] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:45.350 [2024-11-27 11:49:11.623613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:45.350 [2024-11-27 11:49:11.623620] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:45.351 [2024-11-27 11:49:11.623629] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.351 [2024-11-27 11:49:11.672166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:45.351 BaseBdev1 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.351 [ 00:10:45.351 { 00:10:45.351 "name": "BaseBdev1", 00:10:45.351 "aliases": [ 00:10:45.351 "0ee2806f-6b40-49f5-9e91-dd005eec26ac" 00:10:45.351 ], 00:10:45.351 "product_name": "Malloc disk", 00:10:45.351 "block_size": 512, 00:10:45.351 "num_blocks": 65536, 00:10:45.351 "uuid": "0ee2806f-6b40-49f5-9e91-dd005eec26ac", 00:10:45.351 "assigned_rate_limits": { 00:10:45.351 "rw_ios_per_sec": 0, 00:10:45.351 "rw_mbytes_per_sec": 0, 00:10:45.351 "r_mbytes_per_sec": 0, 00:10:45.351 "w_mbytes_per_sec": 0 00:10:45.351 }, 00:10:45.351 "claimed": true, 00:10:45.351 "claim_type": "exclusive_write", 00:10:45.351 "zoned": false, 00:10:45.351 "supported_io_types": { 00:10:45.351 "read": true, 00:10:45.351 "write": true, 00:10:45.351 "unmap": true, 00:10:45.351 "flush": true, 00:10:45.351 "reset": true, 00:10:45.351 "nvme_admin": false, 00:10:45.351 "nvme_io": false, 00:10:45.351 "nvme_io_md": false, 00:10:45.351 "write_zeroes": true, 00:10:45.351 "zcopy": true, 00:10:45.351 "get_zone_info": false, 00:10:45.351 "zone_management": false, 00:10:45.351 "zone_append": false, 00:10:45.351 "compare": false, 00:10:45.351 "compare_and_write": false, 00:10:45.351 "abort": true, 00:10:45.351 "seek_hole": false, 00:10:45.351 "seek_data": false, 00:10:45.351 "copy": true, 00:10:45.351 "nvme_iov_md": false 00:10:45.351 }, 00:10:45.351 "memory_domains": [ 00:10:45.351 { 00:10:45.351 "dma_device_id": "system", 00:10:45.351 "dma_device_type": 1 00:10:45.351 }, 00:10:45.351 { 00:10:45.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.351 "dma_device_type": 2 00:10:45.351 } 00:10:45.351 ], 00:10:45.351 "driver_specific": {} 00:10:45.351 } 00:10:45.351 ] 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.351 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.610 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.610 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.610 "name": "Existed_Raid", 
00:10:45.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.610 "strip_size_kb": 64, 00:10:45.610 "state": "configuring", 00:10:45.610 "raid_level": "concat", 00:10:45.610 "superblock": false, 00:10:45.610 "num_base_bdevs": 4, 00:10:45.610 "num_base_bdevs_discovered": 1, 00:10:45.610 "num_base_bdevs_operational": 4, 00:10:45.610 "base_bdevs_list": [ 00:10:45.610 { 00:10:45.610 "name": "BaseBdev1", 00:10:45.610 "uuid": "0ee2806f-6b40-49f5-9e91-dd005eec26ac", 00:10:45.610 "is_configured": true, 00:10:45.610 "data_offset": 0, 00:10:45.610 "data_size": 65536 00:10:45.610 }, 00:10:45.610 { 00:10:45.610 "name": "BaseBdev2", 00:10:45.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.610 "is_configured": false, 00:10:45.610 "data_offset": 0, 00:10:45.610 "data_size": 0 00:10:45.610 }, 00:10:45.610 { 00:10:45.610 "name": "BaseBdev3", 00:10:45.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.610 "is_configured": false, 00:10:45.610 "data_offset": 0, 00:10:45.610 "data_size": 0 00:10:45.610 }, 00:10:45.610 { 00:10:45.610 "name": "BaseBdev4", 00:10:45.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.610 "is_configured": false, 00:10:45.610 "data_offset": 0, 00:10:45.610 "data_size": 0 00:10:45.610 } 00:10:45.610 ] 00:10:45.610 }' 00:10:45.610 11:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.610 11:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.869 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:45.869 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.869 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.869 [2024-11-27 11:49:12.179496] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:45.869 [2024-11-27 11:49:12.179559] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:45.869 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.869 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:45.869 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.869 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.869 [2024-11-27 11:49:12.187535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:45.869 [2024-11-27 11:49:12.189489] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:45.869 [2024-11-27 11:49:12.189529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:45.869 [2024-11-27 11:49:12.189539] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:45.869 [2024-11-27 11:49:12.189550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:45.869 [2024-11-27 11:49:12.189556] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:45.869 [2024-11-27 11:49:12.189565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:45.869 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.869 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:45.869 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:45.869 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
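The counting loop seen earlier in this test (`(( i = 1 )); (( i <= num_base_bdevs )); echo BaseBdev$i`) builds the list of base bdev names passed to `bdev_raid_create`. It can be sketched as a standalone helper (the function name here is hypothetical):

```shell
#!/usr/bin/env bash
# Generate the BaseBdev1..BaseBdevN names that the state-function test
# quotes into 'bdev_raid_create ... -b', mirroring the log's counting loop.
gen_base_bdevs() {
    local num_base_bdevs=$1
    local base_bdevs=()
    local i
    for ((i = 1; i <= num_base_bdevs; i++)); do
        base_bdevs+=("BaseBdev$i")
    done
    echo "${base_bdevs[@]}"
}
```

For `num_base_bdevs=4` this yields `BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4`, the exact list the log shows being quoted into `bdev_raid_create -z 64 -r concat -b '...' -n Existed_Raid`.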
00:10:45.869 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.869 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.869 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.869 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.869 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.870 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.870 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.870 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.870 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.870 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.870 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.870 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.870 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.870 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.870 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.870 "name": "Existed_Raid", 00:10:45.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.870 "strip_size_kb": 64, 00:10:45.870 "state": "configuring", 00:10:45.870 "raid_level": "concat", 00:10:45.870 "superblock": false, 00:10:45.870 "num_base_bdevs": 4, 00:10:45.870 
"num_base_bdevs_discovered": 1, 00:10:45.870 "num_base_bdevs_operational": 4, 00:10:45.870 "base_bdevs_list": [ 00:10:45.870 { 00:10:45.870 "name": "BaseBdev1", 00:10:45.870 "uuid": "0ee2806f-6b40-49f5-9e91-dd005eec26ac", 00:10:45.870 "is_configured": true, 00:10:45.870 "data_offset": 0, 00:10:45.870 "data_size": 65536 00:10:45.870 }, 00:10:45.870 { 00:10:45.870 "name": "BaseBdev2", 00:10:45.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.870 "is_configured": false, 00:10:45.870 "data_offset": 0, 00:10:45.870 "data_size": 0 00:10:45.870 }, 00:10:45.870 { 00:10:45.870 "name": "BaseBdev3", 00:10:45.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.870 "is_configured": false, 00:10:45.870 "data_offset": 0, 00:10:45.870 "data_size": 0 00:10:45.870 }, 00:10:45.870 { 00:10:45.870 "name": "BaseBdev4", 00:10:45.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.870 "is_configured": false, 00:10:45.870 "data_offset": 0, 00:10:45.870 "data_size": 0 00:10:45.870 } 00:10:45.870 ] 00:10:45.870 }' 00:10:45.870 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.870 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.439 [2024-11-27 11:49:12.681559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:46.439 BaseBdev2 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:46.439 11:49:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.439 [ 00:10:46.439 { 00:10:46.439 "name": "BaseBdev2", 00:10:46.439 "aliases": [ 00:10:46.439 "ed0c0a58-a242-4a4c-9559-f2e863d08289" 00:10:46.439 ], 00:10:46.439 "product_name": "Malloc disk", 00:10:46.439 "block_size": 512, 00:10:46.439 "num_blocks": 65536, 00:10:46.439 "uuid": "ed0c0a58-a242-4a4c-9559-f2e863d08289", 00:10:46.439 "assigned_rate_limits": { 00:10:46.439 "rw_ios_per_sec": 0, 00:10:46.439 "rw_mbytes_per_sec": 0, 00:10:46.439 "r_mbytes_per_sec": 0, 00:10:46.439 "w_mbytes_per_sec": 0 00:10:46.439 }, 00:10:46.439 "claimed": true, 00:10:46.439 "claim_type": "exclusive_write", 00:10:46.439 "zoned": false, 00:10:46.439 "supported_io_types": { 
00:10:46.439 "read": true, 00:10:46.439 "write": true, 00:10:46.439 "unmap": true, 00:10:46.439 "flush": true, 00:10:46.439 "reset": true, 00:10:46.439 "nvme_admin": false, 00:10:46.439 "nvme_io": false, 00:10:46.439 "nvme_io_md": false, 00:10:46.439 "write_zeroes": true, 00:10:46.439 "zcopy": true, 00:10:46.439 "get_zone_info": false, 00:10:46.439 "zone_management": false, 00:10:46.439 "zone_append": false, 00:10:46.439 "compare": false, 00:10:46.439 "compare_and_write": false, 00:10:46.439 "abort": true, 00:10:46.439 "seek_hole": false, 00:10:46.439 "seek_data": false, 00:10:46.439 "copy": true, 00:10:46.439 "nvme_iov_md": false 00:10:46.439 }, 00:10:46.439 "memory_domains": [ 00:10:46.439 { 00:10:46.439 "dma_device_id": "system", 00:10:46.439 "dma_device_type": 1 00:10:46.439 }, 00:10:46.439 { 00:10:46.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.439 "dma_device_type": 2 00:10:46.439 } 00:10:46.439 ], 00:10:46.439 "driver_specific": {} 00:10:46.439 } 00:10:46.439 ] 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.439 "name": "Existed_Raid", 00:10:46.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.439 "strip_size_kb": 64, 00:10:46.439 "state": "configuring", 00:10:46.439 "raid_level": "concat", 00:10:46.439 "superblock": false, 00:10:46.439 "num_base_bdevs": 4, 00:10:46.439 "num_base_bdevs_discovered": 2, 00:10:46.439 "num_base_bdevs_operational": 4, 00:10:46.439 "base_bdevs_list": [ 00:10:46.439 { 00:10:46.439 "name": "BaseBdev1", 00:10:46.439 "uuid": "0ee2806f-6b40-49f5-9e91-dd005eec26ac", 00:10:46.439 "is_configured": true, 00:10:46.439 "data_offset": 0, 00:10:46.439 "data_size": 65536 00:10:46.439 }, 00:10:46.439 { 00:10:46.439 "name": "BaseBdev2", 00:10:46.439 "uuid": "ed0c0a58-a242-4a4c-9559-f2e863d08289", 00:10:46.439 
"is_configured": true, 00:10:46.439 "data_offset": 0, 00:10:46.439 "data_size": 65536 00:10:46.439 }, 00:10:46.439 { 00:10:46.439 "name": "BaseBdev3", 00:10:46.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.439 "is_configured": false, 00:10:46.439 "data_offset": 0, 00:10:46.439 "data_size": 0 00:10:46.439 }, 00:10:46.439 { 00:10:46.439 "name": "BaseBdev4", 00:10:46.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.439 "is_configured": false, 00:10:46.439 "data_offset": 0, 00:10:46.439 "data_size": 0 00:10:46.439 } 00:10:46.439 ] 00:10:46.439 }' 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.439 11:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.006 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:47.006 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.006 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.006 [2024-11-27 11:49:13.185447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:47.006 BaseBdev3 00:10:47.006 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.006 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:47.006 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:47.006 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.006 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:47.006 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.006 11:49:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.006 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.006 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.006 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.006 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.006 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:47.006 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.006 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.006 [ 00:10:47.006 { 00:10:47.006 "name": "BaseBdev3", 00:10:47.006 "aliases": [ 00:10:47.006 "399723bd-5ea2-4cd3-a782-eaaeddb732ac" 00:10:47.006 ], 00:10:47.006 "product_name": "Malloc disk", 00:10:47.006 "block_size": 512, 00:10:47.006 "num_blocks": 65536, 00:10:47.006 "uuid": "399723bd-5ea2-4cd3-a782-eaaeddb732ac", 00:10:47.006 "assigned_rate_limits": { 00:10:47.006 "rw_ios_per_sec": 0, 00:10:47.006 "rw_mbytes_per_sec": 0, 00:10:47.006 "r_mbytes_per_sec": 0, 00:10:47.006 "w_mbytes_per_sec": 0 00:10:47.006 }, 00:10:47.006 "claimed": true, 00:10:47.006 "claim_type": "exclusive_write", 00:10:47.006 "zoned": false, 00:10:47.006 "supported_io_types": { 00:10:47.006 "read": true, 00:10:47.006 "write": true, 00:10:47.006 "unmap": true, 00:10:47.006 "flush": true, 00:10:47.006 "reset": true, 00:10:47.006 "nvme_admin": false, 00:10:47.006 "nvme_io": false, 00:10:47.006 "nvme_io_md": false, 00:10:47.006 "write_zeroes": true, 00:10:47.006 "zcopy": true, 00:10:47.006 "get_zone_info": false, 00:10:47.006 "zone_management": false, 00:10:47.006 "zone_append": false, 00:10:47.006 "compare": false, 00:10:47.006 "compare_and_write": false, 
00:10:47.006 "abort": true, 00:10:47.007 "seek_hole": false, 00:10:47.007 "seek_data": false, 00:10:47.007 "copy": true, 00:10:47.007 "nvme_iov_md": false 00:10:47.007 }, 00:10:47.007 "memory_domains": [ 00:10:47.007 { 00:10:47.007 "dma_device_id": "system", 00:10:47.007 "dma_device_type": 1 00:10:47.007 }, 00:10:47.007 { 00:10:47.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.007 "dma_device_type": 2 00:10:47.007 } 00:10:47.007 ], 00:10:47.007 "driver_specific": {} 00:10:47.007 } 00:10:47.007 ] 00:10:47.007 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.007 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:47.007 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:47.007 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:47.007 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:47.007 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.007 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.007 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.007 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.007 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.007 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.007 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.007 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:47.007 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.007 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.007 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.007 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.007 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.007 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.007 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.007 "name": "Existed_Raid", 00:10:47.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.007 "strip_size_kb": 64, 00:10:47.007 "state": "configuring", 00:10:47.007 "raid_level": "concat", 00:10:47.007 "superblock": false, 00:10:47.007 "num_base_bdevs": 4, 00:10:47.007 "num_base_bdevs_discovered": 3, 00:10:47.007 "num_base_bdevs_operational": 4, 00:10:47.007 "base_bdevs_list": [ 00:10:47.007 { 00:10:47.007 "name": "BaseBdev1", 00:10:47.007 "uuid": "0ee2806f-6b40-49f5-9e91-dd005eec26ac", 00:10:47.007 "is_configured": true, 00:10:47.007 "data_offset": 0, 00:10:47.007 "data_size": 65536 00:10:47.007 }, 00:10:47.007 { 00:10:47.007 "name": "BaseBdev2", 00:10:47.007 "uuid": "ed0c0a58-a242-4a4c-9559-f2e863d08289", 00:10:47.007 "is_configured": true, 00:10:47.007 "data_offset": 0, 00:10:47.007 "data_size": 65536 00:10:47.007 }, 00:10:47.007 { 00:10:47.007 "name": "BaseBdev3", 00:10:47.007 "uuid": "399723bd-5ea2-4cd3-a782-eaaeddb732ac", 00:10:47.007 "is_configured": true, 00:10:47.007 "data_offset": 0, 00:10:47.007 "data_size": 65536 00:10:47.007 }, 00:10:47.007 { 00:10:47.007 "name": "BaseBdev4", 00:10:47.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.007 "is_configured": false, 
00:10:47.007 "data_offset": 0, 00:10:47.007 "data_size": 0 00:10:47.007 } 00:10:47.007 ] 00:10:47.007 }' 00:10:47.007 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.007 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.265 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:47.265 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.265 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.607 [2024-11-27 11:49:13.675265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:47.607 [2024-11-27 11:49:13.675405] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:47.607 [2024-11-27 11:49:13.675438] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:47.607 [2024-11-27 11:49:13.675757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:47.607 [2024-11-27 11:49:13.675985] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:47.607 [2024-11-27 11:49:13.676033] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:47.607 [2024-11-27 11:49:13.676325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.607 BaseBdev4 00:10:47.607 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.607 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:47.607 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:47.607 11:49:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.607 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:47.607 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.607 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.607 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.607 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.607 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.607 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.607 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:47.607 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.607 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.607 [ 00:10:47.607 { 00:10:47.607 "name": "BaseBdev4", 00:10:47.607 "aliases": [ 00:10:47.607 "db7efb71-89bf-4546-9a98-102dd700917b" 00:10:47.607 ], 00:10:47.607 "product_name": "Malloc disk", 00:10:47.607 "block_size": 512, 00:10:47.607 "num_blocks": 65536, 00:10:47.607 "uuid": "db7efb71-89bf-4546-9a98-102dd700917b", 00:10:47.607 "assigned_rate_limits": { 00:10:47.607 "rw_ios_per_sec": 0, 00:10:47.607 "rw_mbytes_per_sec": 0, 00:10:47.607 "r_mbytes_per_sec": 0, 00:10:47.607 "w_mbytes_per_sec": 0 00:10:47.607 }, 00:10:47.607 "claimed": true, 00:10:47.607 "claim_type": "exclusive_write", 00:10:47.607 "zoned": false, 00:10:47.607 "supported_io_types": { 00:10:47.607 "read": true, 00:10:47.607 "write": true, 00:10:47.607 "unmap": true, 00:10:47.607 "flush": true, 00:10:47.607 "reset": true, 00:10:47.607 
"nvme_admin": false, 00:10:47.607 "nvme_io": false, 00:10:47.607 "nvme_io_md": false, 00:10:47.607 "write_zeroes": true, 00:10:47.607 "zcopy": true, 00:10:47.607 "get_zone_info": false, 00:10:47.607 "zone_management": false, 00:10:47.607 "zone_append": false, 00:10:47.607 "compare": false, 00:10:47.607 "compare_and_write": false, 00:10:47.607 "abort": true, 00:10:47.607 "seek_hole": false, 00:10:47.607 "seek_data": false, 00:10:47.607 "copy": true, 00:10:47.607 "nvme_iov_md": false 00:10:47.607 }, 00:10:47.607 "memory_domains": [ 00:10:47.607 { 00:10:47.607 "dma_device_id": "system", 00:10:47.607 "dma_device_type": 1 00:10:47.607 }, 00:10:47.607 { 00:10:47.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.607 "dma_device_type": 2 00:10:47.607 } 00:10:47.607 ], 00:10:47.607 "driver_specific": {} 00:10:47.607 } 00:10:47.607 ] 00:10:47.607 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.607 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:47.607 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:47.607 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:47.607 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:47.607 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.607 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.607 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.607 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.607 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.607 
11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.608 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.608 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.608 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.608 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.608 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.608 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.608 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.608 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.608 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.608 "name": "Existed_Raid", 00:10:47.608 "uuid": "7d7328e0-734c-45c5-8a1b-ad057a83b99c", 00:10:47.608 "strip_size_kb": 64, 00:10:47.608 "state": "online", 00:10:47.608 "raid_level": "concat", 00:10:47.608 "superblock": false, 00:10:47.608 "num_base_bdevs": 4, 00:10:47.608 "num_base_bdevs_discovered": 4, 00:10:47.608 "num_base_bdevs_operational": 4, 00:10:47.608 "base_bdevs_list": [ 00:10:47.608 { 00:10:47.608 "name": "BaseBdev1", 00:10:47.608 "uuid": "0ee2806f-6b40-49f5-9e91-dd005eec26ac", 00:10:47.608 "is_configured": true, 00:10:47.608 "data_offset": 0, 00:10:47.608 "data_size": 65536 00:10:47.608 }, 00:10:47.608 { 00:10:47.608 "name": "BaseBdev2", 00:10:47.608 "uuid": "ed0c0a58-a242-4a4c-9559-f2e863d08289", 00:10:47.608 "is_configured": true, 00:10:47.608 "data_offset": 0, 00:10:47.608 "data_size": 65536 00:10:47.608 }, 00:10:47.608 { 00:10:47.608 "name": "BaseBdev3", 
00:10:47.608 "uuid": "399723bd-5ea2-4cd3-a782-eaaeddb732ac", 00:10:47.608 "is_configured": true, 00:10:47.608 "data_offset": 0, 00:10:47.608 "data_size": 65536 00:10:47.608 }, 00:10:47.608 { 00:10:47.608 "name": "BaseBdev4", 00:10:47.608 "uuid": "db7efb71-89bf-4546-9a98-102dd700917b", 00:10:47.608 "is_configured": true, 00:10:47.608 "data_offset": 0, 00:10:47.608 "data_size": 65536 00:10:47.608 } 00:10:47.608 ] 00:10:47.608 }' 00:10:47.608 11:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.608 11:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.867 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:47.867 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:47.867 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:47.867 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:47.867 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:47.867 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:47.867 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:47.867 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:47.867 11:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.867 11:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.867 [2024-11-27 11:49:14.170847] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:47.867 11:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.867 
11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:47.867 "name": "Existed_Raid", 00:10:47.867 "aliases": [ 00:10:47.867 "7d7328e0-734c-45c5-8a1b-ad057a83b99c" 00:10:47.867 ], 00:10:47.867 "product_name": "Raid Volume", 00:10:47.867 "block_size": 512, 00:10:47.867 "num_blocks": 262144, 00:10:47.867 "uuid": "7d7328e0-734c-45c5-8a1b-ad057a83b99c", 00:10:47.867 "assigned_rate_limits": { 00:10:47.867 "rw_ios_per_sec": 0, 00:10:47.867 "rw_mbytes_per_sec": 0, 00:10:47.867 "r_mbytes_per_sec": 0, 00:10:47.867 "w_mbytes_per_sec": 0 00:10:47.867 }, 00:10:47.867 "claimed": false, 00:10:47.867 "zoned": false, 00:10:47.867 "supported_io_types": { 00:10:47.867 "read": true, 00:10:47.867 "write": true, 00:10:47.867 "unmap": true, 00:10:47.868 "flush": true, 00:10:47.868 "reset": true, 00:10:47.868 "nvme_admin": false, 00:10:47.868 "nvme_io": false, 00:10:47.868 "nvme_io_md": false, 00:10:47.868 "write_zeroes": true, 00:10:47.868 "zcopy": false, 00:10:47.868 "get_zone_info": false, 00:10:47.868 "zone_management": false, 00:10:47.868 "zone_append": false, 00:10:47.868 "compare": false, 00:10:47.868 "compare_and_write": false, 00:10:47.868 "abort": false, 00:10:47.868 "seek_hole": false, 00:10:47.868 "seek_data": false, 00:10:47.868 "copy": false, 00:10:47.868 "nvme_iov_md": false 00:10:47.868 }, 00:10:47.868 "memory_domains": [ 00:10:47.868 { 00:10:47.868 "dma_device_id": "system", 00:10:47.868 "dma_device_type": 1 00:10:47.868 }, 00:10:47.868 { 00:10:47.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.868 "dma_device_type": 2 00:10:47.868 }, 00:10:47.868 { 00:10:47.868 "dma_device_id": "system", 00:10:47.868 "dma_device_type": 1 00:10:47.868 }, 00:10:47.868 { 00:10:47.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.868 "dma_device_type": 2 00:10:47.868 }, 00:10:47.868 { 00:10:47.868 "dma_device_id": "system", 00:10:47.868 "dma_device_type": 1 00:10:47.868 }, 00:10:47.868 { 00:10:47.868 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:47.868 "dma_device_type": 2 00:10:47.868 }, 00:10:47.868 { 00:10:47.868 "dma_device_id": "system", 00:10:47.868 "dma_device_type": 1 00:10:47.868 }, 00:10:47.868 { 00:10:47.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.868 "dma_device_type": 2 00:10:47.868 } 00:10:47.868 ], 00:10:47.868 "driver_specific": { 00:10:47.868 "raid": { 00:10:47.868 "uuid": "7d7328e0-734c-45c5-8a1b-ad057a83b99c", 00:10:47.868 "strip_size_kb": 64, 00:10:47.868 "state": "online", 00:10:47.868 "raid_level": "concat", 00:10:47.868 "superblock": false, 00:10:47.868 "num_base_bdevs": 4, 00:10:47.868 "num_base_bdevs_discovered": 4, 00:10:47.868 "num_base_bdevs_operational": 4, 00:10:47.868 "base_bdevs_list": [ 00:10:47.868 { 00:10:47.868 "name": "BaseBdev1", 00:10:47.868 "uuid": "0ee2806f-6b40-49f5-9e91-dd005eec26ac", 00:10:47.868 "is_configured": true, 00:10:47.868 "data_offset": 0, 00:10:47.868 "data_size": 65536 00:10:47.868 }, 00:10:47.868 { 00:10:47.868 "name": "BaseBdev2", 00:10:47.868 "uuid": "ed0c0a58-a242-4a4c-9559-f2e863d08289", 00:10:47.868 "is_configured": true, 00:10:47.868 "data_offset": 0, 00:10:47.868 "data_size": 65536 00:10:47.868 }, 00:10:47.868 { 00:10:47.868 "name": "BaseBdev3", 00:10:47.868 "uuid": "399723bd-5ea2-4cd3-a782-eaaeddb732ac", 00:10:47.868 "is_configured": true, 00:10:47.868 "data_offset": 0, 00:10:47.868 "data_size": 65536 00:10:47.868 }, 00:10:47.868 { 00:10:47.868 "name": "BaseBdev4", 00:10:47.868 "uuid": "db7efb71-89bf-4546-9a98-102dd700917b", 00:10:47.868 "is_configured": true, 00:10:47.868 "data_offset": 0, 00:10:47.868 "data_size": 65536 00:10:47.868 } 00:10:47.868 ] 00:10:47.868 } 00:10:47.868 } 00:10:47.868 }' 00:10:47.868 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:48.127 BaseBdev2 
00:10:48.127 BaseBdev3 00:10:48.127 BaseBdev4' 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.127 11:49:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.127 11:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.386 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.386 11:49:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.386 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:48.386 11:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.386 11:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.386 [2024-11-27 11:49:14.533967] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:48.386 [2024-11-27 11:49:14.534061] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:48.386 [2024-11-27 11:49:14.534140] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:48.386 11:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.386 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:48.386 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:48.386 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:48.386 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:48.386 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:48.386 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:48.386 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.386 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:48.386 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.386 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:48.386 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.386 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.386 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.386 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.386 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.386 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.386 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.386 11:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.386 11:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.386 11:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.386 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.386 "name": "Existed_Raid", 00:10:48.386 "uuid": "7d7328e0-734c-45c5-8a1b-ad057a83b99c", 00:10:48.386 "strip_size_kb": 64, 00:10:48.386 "state": "offline", 00:10:48.386 "raid_level": "concat", 00:10:48.386 "superblock": false, 00:10:48.386 "num_base_bdevs": 4, 00:10:48.386 "num_base_bdevs_discovered": 3, 00:10:48.386 "num_base_bdevs_operational": 3, 00:10:48.386 "base_bdevs_list": [ 00:10:48.386 { 00:10:48.386 "name": null, 00:10:48.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.386 "is_configured": false, 00:10:48.386 "data_offset": 0, 00:10:48.386 "data_size": 65536 00:10:48.386 }, 00:10:48.386 { 00:10:48.386 "name": "BaseBdev2", 00:10:48.386 "uuid": "ed0c0a58-a242-4a4c-9559-f2e863d08289", 00:10:48.386 "is_configured": 
true, 00:10:48.386 "data_offset": 0, 00:10:48.386 "data_size": 65536 00:10:48.386 }, 00:10:48.386 { 00:10:48.386 "name": "BaseBdev3", 00:10:48.386 "uuid": "399723bd-5ea2-4cd3-a782-eaaeddb732ac", 00:10:48.386 "is_configured": true, 00:10:48.386 "data_offset": 0, 00:10:48.386 "data_size": 65536 00:10:48.386 }, 00:10:48.386 { 00:10:48.386 "name": "BaseBdev4", 00:10:48.386 "uuid": "db7efb71-89bf-4546-9a98-102dd700917b", 00:10:48.386 "is_configured": true, 00:10:48.386 "data_offset": 0, 00:10:48.386 "data_size": 65536 00:10:48.386 } 00:10:48.386 ] 00:10:48.386 }' 00:10:48.386 11:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.386 11:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.954 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:48.954 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:48.954 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.954 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:48.954 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.954 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.954 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.954 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:48.954 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:48.954 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:48.954 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:48.954 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.954 [2024-11-27 11:49:15.180018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:48.954 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.954 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:48.954 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:48.954 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.954 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.954 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.954 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:48.954 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.954 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:48.954 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:48.954 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:48.954 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.954 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.213 [2024-11-27 11:49:15.339371] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:49.213 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.213 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:49.213 11:49:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:49.213 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.213 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:49.213 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.213 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.213 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.213 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:49.213 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:49.213 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:49.213 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.213 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.213 [2024-11-27 11:49:15.493782] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:49.213 [2024-11-27 11:49:15.493855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:49.213 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.213 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:49.213 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:49.472 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.472 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:49.472 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.472 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.473 BaseBdev2 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.473 [ 00:10:49.473 { 00:10:49.473 "name": "BaseBdev2", 00:10:49.473 "aliases": [ 00:10:49.473 "9f98d01b-8999-47a0-af43-3d471971155d" 00:10:49.473 ], 00:10:49.473 "product_name": "Malloc disk", 00:10:49.473 "block_size": 512, 00:10:49.473 "num_blocks": 65536, 00:10:49.473 "uuid": "9f98d01b-8999-47a0-af43-3d471971155d", 00:10:49.473 "assigned_rate_limits": { 00:10:49.473 "rw_ios_per_sec": 0, 00:10:49.473 "rw_mbytes_per_sec": 0, 00:10:49.473 "r_mbytes_per_sec": 0, 00:10:49.473 "w_mbytes_per_sec": 0 00:10:49.473 }, 00:10:49.473 "claimed": false, 00:10:49.473 "zoned": false, 00:10:49.473 "supported_io_types": { 00:10:49.473 "read": true, 00:10:49.473 "write": true, 00:10:49.473 "unmap": true, 00:10:49.473 "flush": true, 00:10:49.473 "reset": true, 00:10:49.473 "nvme_admin": false, 00:10:49.473 "nvme_io": false, 00:10:49.473 "nvme_io_md": false, 00:10:49.473 "write_zeroes": true, 00:10:49.473 "zcopy": true, 00:10:49.473 "get_zone_info": false, 00:10:49.473 "zone_management": false, 00:10:49.473 "zone_append": false, 00:10:49.473 "compare": false, 00:10:49.473 "compare_and_write": false, 00:10:49.473 "abort": true, 00:10:49.473 "seek_hole": false, 00:10:49.473 
"seek_data": false, 00:10:49.473 "copy": true, 00:10:49.473 "nvme_iov_md": false 00:10:49.473 }, 00:10:49.473 "memory_domains": [ 00:10:49.473 { 00:10:49.473 "dma_device_id": "system", 00:10:49.473 "dma_device_type": 1 00:10:49.473 }, 00:10:49.473 { 00:10:49.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.473 "dma_device_type": 2 00:10:49.473 } 00:10:49.473 ], 00:10:49.473 "driver_specific": {} 00:10:49.473 } 00:10:49.473 ] 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.473 BaseBdev3 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.473 [ 00:10:49.473 { 00:10:49.473 "name": "BaseBdev3", 00:10:49.473 "aliases": [ 00:10:49.473 "cf561196-bbc8-4ecd-9c09-b6357d3f8f75" 00:10:49.473 ], 00:10:49.473 "product_name": "Malloc disk", 00:10:49.473 "block_size": 512, 00:10:49.473 "num_blocks": 65536, 00:10:49.473 "uuid": "cf561196-bbc8-4ecd-9c09-b6357d3f8f75", 00:10:49.473 "assigned_rate_limits": { 00:10:49.473 "rw_ios_per_sec": 0, 00:10:49.473 "rw_mbytes_per_sec": 0, 00:10:49.473 "r_mbytes_per_sec": 0, 00:10:49.473 "w_mbytes_per_sec": 0 00:10:49.473 }, 00:10:49.473 "claimed": false, 00:10:49.473 "zoned": false, 00:10:49.473 "supported_io_types": { 00:10:49.473 "read": true, 00:10:49.473 "write": true, 00:10:49.473 "unmap": true, 00:10:49.473 "flush": true, 00:10:49.473 "reset": true, 00:10:49.473 "nvme_admin": false, 00:10:49.473 "nvme_io": false, 00:10:49.473 "nvme_io_md": false, 00:10:49.473 "write_zeroes": true, 00:10:49.473 "zcopy": true, 00:10:49.473 "get_zone_info": false, 00:10:49.473 "zone_management": false, 00:10:49.473 "zone_append": false, 00:10:49.473 "compare": false, 00:10:49.473 "compare_and_write": false, 00:10:49.473 "abort": true, 00:10:49.473 "seek_hole": false, 00:10:49.473 "seek_data": false, 
00:10:49.473 "copy": true, 00:10:49.473 "nvme_iov_md": false 00:10:49.473 }, 00:10:49.473 "memory_domains": [ 00:10:49.473 { 00:10:49.473 "dma_device_id": "system", 00:10:49.473 "dma_device_type": 1 00:10:49.473 }, 00:10:49.473 { 00:10:49.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.473 "dma_device_type": 2 00:10:49.473 } 00:10:49.473 ], 00:10:49.473 "driver_specific": {} 00:10:49.473 } 00:10:49.473 ] 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.473 BaseBdev4 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.473 
11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.473 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.732 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.732 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:49.732 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.732 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.732 [ 00:10:49.732 { 00:10:49.732 "name": "BaseBdev4", 00:10:49.732 "aliases": [ 00:10:49.732 "a25b81fb-4243-4150-b370-63bff26d973c" 00:10:49.732 ], 00:10:49.732 "product_name": "Malloc disk", 00:10:49.732 "block_size": 512, 00:10:49.732 "num_blocks": 65536, 00:10:49.732 "uuid": "a25b81fb-4243-4150-b370-63bff26d973c", 00:10:49.732 "assigned_rate_limits": { 00:10:49.732 "rw_ios_per_sec": 0, 00:10:49.732 "rw_mbytes_per_sec": 0, 00:10:49.732 "r_mbytes_per_sec": 0, 00:10:49.732 "w_mbytes_per_sec": 0 00:10:49.732 }, 00:10:49.732 "claimed": false, 00:10:49.732 "zoned": false, 00:10:49.732 "supported_io_types": { 00:10:49.732 "read": true, 00:10:49.732 "write": true, 00:10:49.732 "unmap": true, 00:10:49.732 "flush": true, 00:10:49.732 "reset": true, 00:10:49.732 "nvme_admin": false, 00:10:49.732 "nvme_io": false, 00:10:49.732 "nvme_io_md": false, 00:10:49.732 "write_zeroes": true, 00:10:49.733 "zcopy": true, 00:10:49.733 "get_zone_info": false, 00:10:49.733 "zone_management": false, 00:10:49.733 "zone_append": false, 00:10:49.733 "compare": false, 00:10:49.733 "compare_and_write": false, 00:10:49.733 "abort": true, 00:10:49.733 "seek_hole": false, 00:10:49.733 "seek_data": false, 00:10:49.733 
"copy": true, 00:10:49.733 "nvme_iov_md": false 00:10:49.733 }, 00:10:49.733 "memory_domains": [ 00:10:49.733 { 00:10:49.733 "dma_device_id": "system", 00:10:49.733 "dma_device_type": 1 00:10:49.733 }, 00:10:49.733 { 00:10:49.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.733 "dma_device_type": 2 00:10:49.733 } 00:10:49.733 ], 00:10:49.733 "driver_specific": {} 00:10:49.733 } 00:10:49.733 ] 00:10:49.733 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.733 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:49.733 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:49.733 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:49.733 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:49.733 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.733 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.733 [2024-11-27 11:49:15.887640] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:49.733 [2024-11-27 11:49:15.887736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:49.733 [2024-11-27 11:49:15.887788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:49.733 [2024-11-27 11:49:15.889888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:49.733 [2024-11-27 11:49:15.889985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:49.733 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.733 11:49:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:49.733 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.733 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.733 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.733 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.733 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.733 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.733 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.733 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.733 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.733 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.733 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.733 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.733 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.733 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.733 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.733 "name": "Existed_Raid", 00:10:49.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.733 "strip_size_kb": 64, 00:10:49.733 "state": "configuring", 00:10:49.733 
"raid_level": "concat", 00:10:49.733 "superblock": false, 00:10:49.733 "num_base_bdevs": 4, 00:10:49.733 "num_base_bdevs_discovered": 3, 00:10:49.733 "num_base_bdevs_operational": 4, 00:10:49.733 "base_bdevs_list": [ 00:10:49.733 { 00:10:49.733 "name": "BaseBdev1", 00:10:49.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.733 "is_configured": false, 00:10:49.733 "data_offset": 0, 00:10:49.733 "data_size": 0 00:10:49.733 }, 00:10:49.733 { 00:10:49.733 "name": "BaseBdev2", 00:10:49.733 "uuid": "9f98d01b-8999-47a0-af43-3d471971155d", 00:10:49.733 "is_configured": true, 00:10:49.733 "data_offset": 0, 00:10:49.733 "data_size": 65536 00:10:49.733 }, 00:10:49.733 { 00:10:49.733 "name": "BaseBdev3", 00:10:49.733 "uuid": "cf561196-bbc8-4ecd-9c09-b6357d3f8f75", 00:10:49.733 "is_configured": true, 00:10:49.733 "data_offset": 0, 00:10:49.733 "data_size": 65536 00:10:49.733 }, 00:10:49.733 { 00:10:49.733 "name": "BaseBdev4", 00:10:49.733 "uuid": "a25b81fb-4243-4150-b370-63bff26d973c", 00:10:49.733 "is_configured": true, 00:10:49.733 "data_offset": 0, 00:10:49.733 "data_size": 65536 00:10:49.733 } 00:10:49.733 ] 00:10:49.733 }' 00:10:49.733 11:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.733 11:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.992 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:49.992 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.992 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.992 [2024-11-27 11:49:16.334969] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:49.992 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.992 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:49.992 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.992 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.992 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.992 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.992 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.992 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.992 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.992 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.992 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.992 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.992 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.992 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.992 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.992 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.252 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.252 "name": "Existed_Raid", 00:10:50.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.252 "strip_size_kb": 64, 00:10:50.252 "state": "configuring", 00:10:50.252 "raid_level": "concat", 00:10:50.252 "superblock": false, 
00:10:50.252 "num_base_bdevs": 4, 00:10:50.252 "num_base_bdevs_discovered": 2, 00:10:50.252 "num_base_bdevs_operational": 4, 00:10:50.252 "base_bdevs_list": [ 00:10:50.252 { 00:10:50.252 "name": "BaseBdev1", 00:10:50.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.252 "is_configured": false, 00:10:50.252 "data_offset": 0, 00:10:50.252 "data_size": 0 00:10:50.252 }, 00:10:50.252 { 00:10:50.252 "name": null, 00:10:50.252 "uuid": "9f98d01b-8999-47a0-af43-3d471971155d", 00:10:50.252 "is_configured": false, 00:10:50.252 "data_offset": 0, 00:10:50.252 "data_size": 65536 00:10:50.252 }, 00:10:50.252 { 00:10:50.252 "name": "BaseBdev3", 00:10:50.252 "uuid": "cf561196-bbc8-4ecd-9c09-b6357d3f8f75", 00:10:50.252 "is_configured": true, 00:10:50.252 "data_offset": 0, 00:10:50.252 "data_size": 65536 00:10:50.252 }, 00:10:50.252 { 00:10:50.252 "name": "BaseBdev4", 00:10:50.252 "uuid": "a25b81fb-4243-4150-b370-63bff26d973c", 00:10:50.252 "is_configured": true, 00:10:50.252 "data_offset": 0, 00:10:50.252 "data_size": 65536 00:10:50.252 } 00:10:50.252 ] 00:10:50.252 }' 00:10:50.252 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.252 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.511 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.511 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:50.512 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.512 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.512 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.512 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:50.512 11:49:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:50.512 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.512 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.512 [2024-11-27 11:49:16.868255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:50.512 BaseBdev1 00:10:50.512 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.512 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:50.512 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:50.512 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.512 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:50.512 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.512 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.512 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.512 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.512 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.512 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.512 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:50.512 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.512 11:49:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:50.512 [ 00:10:50.512 { 00:10:50.512 "name": "BaseBdev1", 00:10:50.512 "aliases": [ 00:10:50.512 "243368c4-87c2-42b2-9832-26cc7f143b4c" 00:10:50.770 ], 00:10:50.770 "product_name": "Malloc disk", 00:10:50.770 "block_size": 512, 00:10:50.770 "num_blocks": 65536, 00:10:50.770 "uuid": "243368c4-87c2-42b2-9832-26cc7f143b4c", 00:10:50.770 "assigned_rate_limits": { 00:10:50.770 "rw_ios_per_sec": 0, 00:10:50.770 "rw_mbytes_per_sec": 0, 00:10:50.770 "r_mbytes_per_sec": 0, 00:10:50.770 "w_mbytes_per_sec": 0 00:10:50.770 }, 00:10:50.770 "claimed": true, 00:10:50.770 "claim_type": "exclusive_write", 00:10:50.770 "zoned": false, 00:10:50.770 "supported_io_types": { 00:10:50.770 "read": true, 00:10:50.770 "write": true, 00:10:50.770 "unmap": true, 00:10:50.770 "flush": true, 00:10:50.770 "reset": true, 00:10:50.770 "nvme_admin": false, 00:10:50.770 "nvme_io": false, 00:10:50.770 "nvme_io_md": false, 00:10:50.770 "write_zeroes": true, 00:10:50.770 "zcopy": true, 00:10:50.770 "get_zone_info": false, 00:10:50.770 "zone_management": false, 00:10:50.770 "zone_append": false, 00:10:50.770 "compare": false, 00:10:50.770 "compare_and_write": false, 00:10:50.770 "abort": true, 00:10:50.770 "seek_hole": false, 00:10:50.770 "seek_data": false, 00:10:50.770 "copy": true, 00:10:50.770 "nvme_iov_md": false 00:10:50.770 }, 00:10:50.770 "memory_domains": [ 00:10:50.770 { 00:10:50.770 "dma_device_id": "system", 00:10:50.770 "dma_device_type": 1 00:10:50.770 }, 00:10:50.770 { 00:10:50.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.770 "dma_device_type": 2 00:10:50.770 } 00:10:50.770 ], 00:10:50.770 "driver_specific": {} 00:10:50.770 } 00:10:50.770 ] 00:10:50.770 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.770 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:50.770 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:50.770 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.770 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.770 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.771 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.771 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.771 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.771 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.771 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.771 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.771 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.771 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.771 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.771 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.771 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.771 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.771 "name": "Existed_Raid", 00:10:50.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.771 "strip_size_kb": 64, 00:10:50.771 "state": "configuring", 00:10:50.771 "raid_level": "concat", 00:10:50.771 "superblock": false, 
00:10:50.771 "num_base_bdevs": 4, 00:10:50.771 "num_base_bdevs_discovered": 3, 00:10:50.771 "num_base_bdevs_operational": 4, 00:10:50.771 "base_bdevs_list": [ 00:10:50.771 { 00:10:50.771 "name": "BaseBdev1", 00:10:50.771 "uuid": "243368c4-87c2-42b2-9832-26cc7f143b4c", 00:10:50.771 "is_configured": true, 00:10:50.771 "data_offset": 0, 00:10:50.771 "data_size": 65536 00:10:50.771 }, 00:10:50.771 { 00:10:50.771 "name": null, 00:10:50.771 "uuid": "9f98d01b-8999-47a0-af43-3d471971155d", 00:10:50.771 "is_configured": false, 00:10:50.771 "data_offset": 0, 00:10:50.771 "data_size": 65536 00:10:50.771 }, 00:10:50.771 { 00:10:50.771 "name": "BaseBdev3", 00:10:50.771 "uuid": "cf561196-bbc8-4ecd-9c09-b6357d3f8f75", 00:10:50.771 "is_configured": true, 00:10:50.771 "data_offset": 0, 00:10:50.771 "data_size": 65536 00:10:50.771 }, 00:10:50.771 { 00:10:50.771 "name": "BaseBdev4", 00:10:50.771 "uuid": "a25b81fb-4243-4150-b370-63bff26d973c", 00:10:50.771 "is_configured": true, 00:10:50.771 "data_offset": 0, 00:10:50.771 "data_size": 65536 00:10:50.771 } 00:10:50.771 ] 00:10:50.771 }' 00:10:50.771 11:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.771 11:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.029 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:51.029 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.029 11:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.029 11:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.029 11:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.288 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:51.288 11:49:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:51.288 11:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.288 11:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.288 [2024-11-27 11:49:17.435404] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:51.288 11:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.288 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:51.288 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.288 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.288 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.288 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.288 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.288 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.288 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.288 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.288 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.288 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.288 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.288 11:49:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.288 11:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.288 11:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.288 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.288 "name": "Existed_Raid", 00:10:51.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.288 "strip_size_kb": 64, 00:10:51.288 "state": "configuring", 00:10:51.288 "raid_level": "concat", 00:10:51.288 "superblock": false, 00:10:51.288 "num_base_bdevs": 4, 00:10:51.288 "num_base_bdevs_discovered": 2, 00:10:51.288 "num_base_bdevs_operational": 4, 00:10:51.288 "base_bdevs_list": [ 00:10:51.288 { 00:10:51.288 "name": "BaseBdev1", 00:10:51.288 "uuid": "243368c4-87c2-42b2-9832-26cc7f143b4c", 00:10:51.288 "is_configured": true, 00:10:51.289 "data_offset": 0, 00:10:51.289 "data_size": 65536 00:10:51.289 }, 00:10:51.289 { 00:10:51.289 "name": null, 00:10:51.289 "uuid": "9f98d01b-8999-47a0-af43-3d471971155d", 00:10:51.289 "is_configured": false, 00:10:51.289 "data_offset": 0, 00:10:51.289 "data_size": 65536 00:10:51.289 }, 00:10:51.289 { 00:10:51.289 "name": null, 00:10:51.289 "uuid": "cf561196-bbc8-4ecd-9c09-b6357d3f8f75", 00:10:51.289 "is_configured": false, 00:10:51.289 "data_offset": 0, 00:10:51.289 "data_size": 65536 00:10:51.289 }, 00:10:51.289 { 00:10:51.289 "name": "BaseBdev4", 00:10:51.289 "uuid": "a25b81fb-4243-4150-b370-63bff26d973c", 00:10:51.289 "is_configured": true, 00:10:51.289 "data_offset": 0, 00:10:51.289 "data_size": 65536 00:10:51.289 } 00:10:51.289 ] 00:10:51.289 }' 00:10:51.289 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.289 11:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.547 11:49:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.547 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:51.547 11:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.547 11:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.547 11:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.547 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:51.547 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:51.547 11:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.547 11:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.547 [2024-11-27 11:49:17.926583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:51.806 11:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.806 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:51.806 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.806 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.806 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.806 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.806 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.806 11:49:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.806 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.806 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.806 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.806 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.806 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.806 11:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.806 11:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.806 11:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.806 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.806 "name": "Existed_Raid", 00:10:51.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.806 "strip_size_kb": 64, 00:10:51.806 "state": "configuring", 00:10:51.806 "raid_level": "concat", 00:10:51.806 "superblock": false, 00:10:51.806 "num_base_bdevs": 4, 00:10:51.806 "num_base_bdevs_discovered": 3, 00:10:51.806 "num_base_bdevs_operational": 4, 00:10:51.806 "base_bdevs_list": [ 00:10:51.806 { 00:10:51.806 "name": "BaseBdev1", 00:10:51.806 "uuid": "243368c4-87c2-42b2-9832-26cc7f143b4c", 00:10:51.806 "is_configured": true, 00:10:51.806 "data_offset": 0, 00:10:51.806 "data_size": 65536 00:10:51.806 }, 00:10:51.806 { 00:10:51.806 "name": null, 00:10:51.806 "uuid": "9f98d01b-8999-47a0-af43-3d471971155d", 00:10:51.806 "is_configured": false, 00:10:51.806 "data_offset": 0, 00:10:51.806 "data_size": 65536 00:10:51.806 }, 00:10:51.806 { 00:10:51.806 "name": "BaseBdev3", 00:10:51.806 "uuid": 
"cf561196-bbc8-4ecd-9c09-b6357d3f8f75", 00:10:51.806 "is_configured": true, 00:10:51.806 "data_offset": 0, 00:10:51.806 "data_size": 65536 00:10:51.806 }, 00:10:51.806 { 00:10:51.806 "name": "BaseBdev4", 00:10:51.806 "uuid": "a25b81fb-4243-4150-b370-63bff26d973c", 00:10:51.806 "is_configured": true, 00:10:51.806 "data_offset": 0, 00:10:51.806 "data_size": 65536 00:10:51.806 } 00:10:51.806 ] 00:10:51.806 }' 00:10:51.806 11:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.806 11:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.065 11:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.065 11:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:52.065 11:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.065 11:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.065 11:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.065 11:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:52.065 11:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:52.065 11:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.065 11:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.065 [2024-11-27 11:49:18.417726] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:52.340 11:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.340 11:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:52.340 11:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.340 11:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.340 11:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.340 11:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.340 11:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.340 11:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.340 11:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.340 11:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.340 11:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.340 11:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.340 11:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.340 11:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.340 11:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.340 11:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.340 11:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.340 "name": "Existed_Raid", 00:10:52.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.340 "strip_size_kb": 64, 00:10:52.340 "state": "configuring", 00:10:52.340 "raid_level": "concat", 00:10:52.340 "superblock": false, 00:10:52.340 "num_base_bdevs": 4, 00:10:52.340 
"num_base_bdevs_discovered": 2, 00:10:52.340 "num_base_bdevs_operational": 4, 00:10:52.340 "base_bdevs_list": [ 00:10:52.340 { 00:10:52.340 "name": null, 00:10:52.340 "uuid": "243368c4-87c2-42b2-9832-26cc7f143b4c", 00:10:52.340 "is_configured": false, 00:10:52.340 "data_offset": 0, 00:10:52.340 "data_size": 65536 00:10:52.340 }, 00:10:52.340 { 00:10:52.340 "name": null, 00:10:52.340 "uuid": "9f98d01b-8999-47a0-af43-3d471971155d", 00:10:52.340 "is_configured": false, 00:10:52.340 "data_offset": 0, 00:10:52.340 "data_size": 65536 00:10:52.340 }, 00:10:52.340 { 00:10:52.340 "name": "BaseBdev3", 00:10:52.340 "uuid": "cf561196-bbc8-4ecd-9c09-b6357d3f8f75", 00:10:52.340 "is_configured": true, 00:10:52.340 "data_offset": 0, 00:10:52.340 "data_size": 65536 00:10:52.340 }, 00:10:52.340 { 00:10:52.340 "name": "BaseBdev4", 00:10:52.340 "uuid": "a25b81fb-4243-4150-b370-63bff26d973c", 00:10:52.340 "is_configured": true, 00:10:52.340 "data_offset": 0, 00:10:52.340 "data_size": 65536 00:10:52.340 } 00:10:52.340 ] 00:10:52.340 }' 00:10:52.340 11:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.340 11:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.623 11:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:52.623 11:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.623 11:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.623 11:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.882 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.882 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:52.882 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:52.882 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.882 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.882 [2024-11-27 11:49:19.017614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:52.882 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.882 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:52.882 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.882 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.882 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.882 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.882 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.882 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.882 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.882 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.882 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.882 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.882 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.882 11:49:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.882 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.882 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.882 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.882 "name": "Existed_Raid", 00:10:52.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.882 "strip_size_kb": 64, 00:10:52.882 "state": "configuring", 00:10:52.882 "raid_level": "concat", 00:10:52.882 "superblock": false, 00:10:52.882 "num_base_bdevs": 4, 00:10:52.882 "num_base_bdevs_discovered": 3, 00:10:52.882 "num_base_bdevs_operational": 4, 00:10:52.882 "base_bdevs_list": [ 00:10:52.882 { 00:10:52.882 "name": null, 00:10:52.882 "uuid": "243368c4-87c2-42b2-9832-26cc7f143b4c", 00:10:52.882 "is_configured": false, 00:10:52.882 "data_offset": 0, 00:10:52.882 "data_size": 65536 00:10:52.882 }, 00:10:52.882 { 00:10:52.882 "name": "BaseBdev2", 00:10:52.882 "uuid": "9f98d01b-8999-47a0-af43-3d471971155d", 00:10:52.882 "is_configured": true, 00:10:52.882 "data_offset": 0, 00:10:52.882 "data_size": 65536 00:10:52.882 }, 00:10:52.882 { 00:10:52.882 "name": "BaseBdev3", 00:10:52.882 "uuid": "cf561196-bbc8-4ecd-9c09-b6357d3f8f75", 00:10:52.882 "is_configured": true, 00:10:52.882 "data_offset": 0, 00:10:52.882 "data_size": 65536 00:10:52.882 }, 00:10:52.882 { 00:10:52.882 "name": "BaseBdev4", 00:10:52.882 "uuid": "a25b81fb-4243-4150-b370-63bff26d973c", 00:10:52.882 "is_configured": true, 00:10:52.882 "data_offset": 0, 00:10:52.882 "data_size": 65536 00:10:52.882 } 00:10:52.882 ] 00:10:52.882 }' 00:10:52.882 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.882 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.140 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:53.140 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.140 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.140 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:53.140 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.399 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:53.399 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.399 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:53.399 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.399 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.399 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.399 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 243368c4-87c2-42b2-9832-26cc7f143b4c 00:10:53.399 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.399 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.399 [2024-11-27 11:49:19.613661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:53.399 [2024-11-27 11:49:19.613714] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:53.399 [2024-11-27 11:49:19.613722] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:53.399 [2024-11-27 11:49:19.614004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:53.400 [2024-11-27 11:49:19.614153] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:53.400 [2024-11-27 11:49:19.614164] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:53.400 [2024-11-27 11:49:19.614415] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.400 NewBaseBdev 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:53.400 [ 00:10:53.400 { 00:10:53.400 "name": "NewBaseBdev", 00:10:53.400 "aliases": [ 00:10:53.400 "243368c4-87c2-42b2-9832-26cc7f143b4c" 00:10:53.400 ], 00:10:53.400 "product_name": "Malloc disk", 00:10:53.400 "block_size": 512, 00:10:53.400 "num_blocks": 65536, 00:10:53.400 "uuid": "243368c4-87c2-42b2-9832-26cc7f143b4c", 00:10:53.400 "assigned_rate_limits": { 00:10:53.400 "rw_ios_per_sec": 0, 00:10:53.400 "rw_mbytes_per_sec": 0, 00:10:53.400 "r_mbytes_per_sec": 0, 00:10:53.400 "w_mbytes_per_sec": 0 00:10:53.400 }, 00:10:53.400 "claimed": true, 00:10:53.400 "claim_type": "exclusive_write", 00:10:53.400 "zoned": false, 00:10:53.400 "supported_io_types": { 00:10:53.400 "read": true, 00:10:53.400 "write": true, 00:10:53.400 "unmap": true, 00:10:53.400 "flush": true, 00:10:53.400 "reset": true, 00:10:53.400 "nvme_admin": false, 00:10:53.400 "nvme_io": false, 00:10:53.400 "nvme_io_md": false, 00:10:53.400 "write_zeroes": true, 00:10:53.400 "zcopy": true, 00:10:53.400 "get_zone_info": false, 00:10:53.400 "zone_management": false, 00:10:53.400 "zone_append": false, 00:10:53.400 "compare": false, 00:10:53.400 "compare_and_write": false, 00:10:53.400 "abort": true, 00:10:53.400 "seek_hole": false, 00:10:53.400 "seek_data": false, 00:10:53.400 "copy": true, 00:10:53.400 "nvme_iov_md": false 00:10:53.400 }, 00:10:53.400 "memory_domains": [ 00:10:53.400 { 00:10:53.400 "dma_device_id": "system", 00:10:53.400 "dma_device_type": 1 00:10:53.400 }, 00:10:53.400 { 00:10:53.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.400 "dma_device_type": 2 00:10:53.400 } 00:10:53.400 ], 00:10:53.400 "driver_specific": {} 00:10:53.400 } 00:10:53.400 ] 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.400 "name": "Existed_Raid", 00:10:53.400 "uuid": "beec178c-ca0c-41a0-b712-6fa35731f9b2", 00:10:53.400 "strip_size_kb": 64, 00:10:53.400 "state": "online", 00:10:53.400 "raid_level": "concat", 00:10:53.400 "superblock": false, 00:10:53.400 
"num_base_bdevs": 4, 00:10:53.400 "num_base_bdevs_discovered": 4, 00:10:53.400 "num_base_bdevs_operational": 4, 00:10:53.400 "base_bdevs_list": [ 00:10:53.400 { 00:10:53.400 "name": "NewBaseBdev", 00:10:53.400 "uuid": "243368c4-87c2-42b2-9832-26cc7f143b4c", 00:10:53.400 "is_configured": true, 00:10:53.400 "data_offset": 0, 00:10:53.400 "data_size": 65536 00:10:53.400 }, 00:10:53.400 { 00:10:53.400 "name": "BaseBdev2", 00:10:53.400 "uuid": "9f98d01b-8999-47a0-af43-3d471971155d", 00:10:53.400 "is_configured": true, 00:10:53.400 "data_offset": 0, 00:10:53.400 "data_size": 65536 00:10:53.400 }, 00:10:53.400 { 00:10:53.400 "name": "BaseBdev3", 00:10:53.400 "uuid": "cf561196-bbc8-4ecd-9c09-b6357d3f8f75", 00:10:53.400 "is_configured": true, 00:10:53.400 "data_offset": 0, 00:10:53.400 "data_size": 65536 00:10:53.400 }, 00:10:53.400 { 00:10:53.400 "name": "BaseBdev4", 00:10:53.400 "uuid": "a25b81fb-4243-4150-b370-63bff26d973c", 00:10:53.400 "is_configured": true, 00:10:53.400 "data_offset": 0, 00:10:53.400 "data_size": 65536 00:10:53.400 } 00:10:53.400 ] 00:10:53.400 }' 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.400 11:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.968 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:53.968 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:53.968 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:53.968 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:53.968 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:53.968 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:53.968 11:49:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:53.968 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:53.968 11:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.968 11:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.968 [2024-11-27 11:49:20.077343] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:53.968 11:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.968 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:53.968 "name": "Existed_Raid", 00:10:53.968 "aliases": [ 00:10:53.968 "beec178c-ca0c-41a0-b712-6fa35731f9b2" 00:10:53.968 ], 00:10:53.968 "product_name": "Raid Volume", 00:10:53.968 "block_size": 512, 00:10:53.968 "num_blocks": 262144, 00:10:53.968 "uuid": "beec178c-ca0c-41a0-b712-6fa35731f9b2", 00:10:53.968 "assigned_rate_limits": { 00:10:53.968 "rw_ios_per_sec": 0, 00:10:53.968 "rw_mbytes_per_sec": 0, 00:10:53.968 "r_mbytes_per_sec": 0, 00:10:53.968 "w_mbytes_per_sec": 0 00:10:53.968 }, 00:10:53.968 "claimed": false, 00:10:53.968 "zoned": false, 00:10:53.968 "supported_io_types": { 00:10:53.968 "read": true, 00:10:53.968 "write": true, 00:10:53.968 "unmap": true, 00:10:53.968 "flush": true, 00:10:53.968 "reset": true, 00:10:53.968 "nvme_admin": false, 00:10:53.968 "nvme_io": false, 00:10:53.968 "nvme_io_md": false, 00:10:53.968 "write_zeroes": true, 00:10:53.968 "zcopy": false, 00:10:53.968 "get_zone_info": false, 00:10:53.968 "zone_management": false, 00:10:53.968 "zone_append": false, 00:10:53.968 "compare": false, 00:10:53.968 "compare_and_write": false, 00:10:53.968 "abort": false, 00:10:53.968 "seek_hole": false, 00:10:53.968 "seek_data": false, 00:10:53.968 "copy": false, 00:10:53.968 "nvme_iov_md": false 00:10:53.968 }, 
00:10:53.968 "memory_domains": [ 00:10:53.968 { 00:10:53.968 "dma_device_id": "system", 00:10:53.968 "dma_device_type": 1 00:10:53.968 }, 00:10:53.968 { 00:10:53.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.968 "dma_device_type": 2 00:10:53.968 }, 00:10:53.968 { 00:10:53.968 "dma_device_id": "system", 00:10:53.968 "dma_device_type": 1 00:10:53.968 }, 00:10:53.968 { 00:10:53.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.968 "dma_device_type": 2 00:10:53.968 }, 00:10:53.968 { 00:10:53.968 "dma_device_id": "system", 00:10:53.968 "dma_device_type": 1 00:10:53.968 }, 00:10:53.968 { 00:10:53.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.968 "dma_device_type": 2 00:10:53.968 }, 00:10:53.968 { 00:10:53.968 "dma_device_id": "system", 00:10:53.968 "dma_device_type": 1 00:10:53.968 }, 00:10:53.968 { 00:10:53.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.968 "dma_device_type": 2 00:10:53.968 } 00:10:53.968 ], 00:10:53.968 "driver_specific": { 00:10:53.968 "raid": { 00:10:53.968 "uuid": "beec178c-ca0c-41a0-b712-6fa35731f9b2", 00:10:53.968 "strip_size_kb": 64, 00:10:53.968 "state": "online", 00:10:53.968 "raid_level": "concat", 00:10:53.968 "superblock": false, 00:10:53.968 "num_base_bdevs": 4, 00:10:53.968 "num_base_bdevs_discovered": 4, 00:10:53.968 "num_base_bdevs_operational": 4, 00:10:53.968 "base_bdevs_list": [ 00:10:53.968 { 00:10:53.968 "name": "NewBaseBdev", 00:10:53.968 "uuid": "243368c4-87c2-42b2-9832-26cc7f143b4c", 00:10:53.968 "is_configured": true, 00:10:53.968 "data_offset": 0, 00:10:53.968 "data_size": 65536 00:10:53.968 }, 00:10:53.968 { 00:10:53.968 "name": "BaseBdev2", 00:10:53.968 "uuid": "9f98d01b-8999-47a0-af43-3d471971155d", 00:10:53.968 "is_configured": true, 00:10:53.968 "data_offset": 0, 00:10:53.968 "data_size": 65536 00:10:53.968 }, 00:10:53.968 { 00:10:53.968 "name": "BaseBdev3", 00:10:53.968 "uuid": "cf561196-bbc8-4ecd-9c09-b6357d3f8f75", 00:10:53.969 "is_configured": true, 00:10:53.969 "data_offset": 0, 
00:10:53.969 "data_size": 65536 00:10:53.969 }, 00:10:53.969 { 00:10:53.969 "name": "BaseBdev4", 00:10:53.969 "uuid": "a25b81fb-4243-4150-b370-63bff26d973c", 00:10:53.969 "is_configured": true, 00:10:53.969 "data_offset": 0, 00:10:53.969 "data_size": 65536 00:10:53.969 } 00:10:53.969 ] 00:10:53.969 } 00:10:53.969 } 00:10:53.969 }' 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:53.969 BaseBdev2 00:10:53.969 BaseBdev3 00:10:53.969 BaseBdev4' 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.969 11:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.228 11:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.228 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.228 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.228 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:54.228 11:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.228 11:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.228 [2024-11-27 11:49:20.376434] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:54.228 [2024-11-27 11:49:20.376517] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:54.228 [2024-11-27 11:49:20.376635] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:54.228 [2024-11-27 11:49:20.376744] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:54.228 [2024-11-27 11:49:20.376796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:54.228 11:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.228 11:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71281 00:10:54.228 11:49:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71281 ']' 00:10:54.228 11:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71281 00:10:54.228 11:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:54.228 11:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:54.228 11:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71281 00:10:54.228 11:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:54.228 11:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:54.228 11:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71281' 00:10:54.228 killing process with pid 71281 00:10:54.228 11:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71281 00:10:54.228 [2024-11-27 11:49:20.421783] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:54.228 11:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71281 00:10:54.491 [2024-11-27 11:49:20.830897] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:55.866 ************************************ 00:10:55.866 END TEST raid_state_function_test 00:10:55.866 ************************************ 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:55.866 00:10:55.866 real 0m11.803s 00:10:55.866 user 0m18.820s 00:10:55.866 sys 0m1.996s 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.866 11:49:22 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:55.866 11:49:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:55.866 11:49:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.866 11:49:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:55.866 ************************************ 00:10:55.866 START TEST raid_state_function_test_sb 00:10:55.866 ************************************ 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:55.866 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:55.867 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:55.867 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:55.867 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:55.867 Process raid pid: 71955 00:10:55.867 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # 
raid_pid=71955 00:10:55.867 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:55.867 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71955' 00:10:55.867 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71955 00:10:55.867 11:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71955 ']' 00:10:55.867 11:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.867 11:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.867 11:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.867 11:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.867 11:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.867 [2024-11-27 11:49:22.165023] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:10:55.867 [2024-11-27 11:49:22.165218] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.125 [2024-11-27 11:49:22.325294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.125 [2024-11-27 11:49:22.442250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.418 [2024-11-27 11:49:22.637536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.418 [2024-11-27 11:49:22.637582] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.680 11:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.680 11:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:56.680 11:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:56.680 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.680 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.680 [2024-11-27 11:49:23.007134] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:56.680 [2024-11-27 11:49:23.007190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:56.680 [2024-11-27 11:49:23.007201] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:56.680 [2024-11-27 11:49:23.007212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:56.680 [2024-11-27 11:49:23.007223] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:56.681 [2024-11-27 11:49:23.007232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:56.681 [2024-11-27 11:49:23.007238] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:56.681 [2024-11-27 11:49:23.007247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:56.681 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.681 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:56.681 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.681 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.681 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.681 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.681 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.681 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.681 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.681 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.681 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.681 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.681 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.681 11:49:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.681 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.681 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.939 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.939 "name": "Existed_Raid", 00:10:56.939 "uuid": "fd1b985d-80ed-43c7-843f-657c5416e4a1", 00:10:56.939 "strip_size_kb": 64, 00:10:56.939 "state": "configuring", 00:10:56.939 "raid_level": "concat", 00:10:56.939 "superblock": true, 00:10:56.939 "num_base_bdevs": 4, 00:10:56.939 "num_base_bdevs_discovered": 0, 00:10:56.939 "num_base_bdevs_operational": 4, 00:10:56.939 "base_bdevs_list": [ 00:10:56.939 { 00:10:56.939 "name": "BaseBdev1", 00:10:56.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.939 "is_configured": false, 00:10:56.939 "data_offset": 0, 00:10:56.939 "data_size": 0 00:10:56.939 }, 00:10:56.939 { 00:10:56.939 "name": "BaseBdev2", 00:10:56.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.939 "is_configured": false, 00:10:56.939 "data_offset": 0, 00:10:56.939 "data_size": 0 00:10:56.939 }, 00:10:56.939 { 00:10:56.939 "name": "BaseBdev3", 00:10:56.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.939 "is_configured": false, 00:10:56.939 "data_offset": 0, 00:10:56.939 "data_size": 0 00:10:56.939 }, 00:10:56.939 { 00:10:56.939 "name": "BaseBdev4", 00:10:56.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.939 "is_configured": false, 00:10:56.939 "data_offset": 0, 00:10:56.940 "data_size": 0 00:10:56.940 } 00:10:56.940 ] 00:10:56.940 }' 00:10:56.940 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.940 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.198 11:49:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:57.198 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.198 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.198 [2024-11-27 11:49:23.494224] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:57.198 [2024-11-27 11:49:23.494330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:57.198 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.198 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:57.198 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.198 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.198 [2024-11-27 11:49:23.506198] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:57.198 [2024-11-27 11:49:23.506279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:57.198 [2024-11-27 11:49:23.506309] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:57.198 [2024-11-27 11:49:23.506333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:57.198 [2024-11-27 11:49:23.506369] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:57.198 [2024-11-27 11:49:23.506415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:57.198 [2024-11-27 11:49:23.506442] bdev.c:8666:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:57.198 [2024-11-27 11:49:23.506467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:57.198 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.198 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:57.198 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.199 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.199 [2024-11-27 11:49:23.556297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:57.199 BaseBdev1 00:10:57.199 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.199 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:57.199 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:57.199 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.199 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:57.199 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.199 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:57.199 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.199 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.199 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.199 11:49:23 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.199 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:57.199 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.199 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.457 [ 00:10:57.457 { 00:10:57.457 "name": "BaseBdev1", 00:10:57.457 "aliases": [ 00:10:57.457 "08b8a5d1-a16a-4f9c-b62b-a90a68375416" 00:10:57.457 ], 00:10:57.457 "product_name": "Malloc disk", 00:10:57.457 "block_size": 512, 00:10:57.457 "num_blocks": 65536, 00:10:57.457 "uuid": "08b8a5d1-a16a-4f9c-b62b-a90a68375416", 00:10:57.457 "assigned_rate_limits": { 00:10:57.457 "rw_ios_per_sec": 0, 00:10:57.457 "rw_mbytes_per_sec": 0, 00:10:57.457 "r_mbytes_per_sec": 0, 00:10:57.457 "w_mbytes_per_sec": 0 00:10:57.457 }, 00:10:57.457 "claimed": true, 00:10:57.457 "claim_type": "exclusive_write", 00:10:57.457 "zoned": false, 00:10:57.457 "supported_io_types": { 00:10:57.457 "read": true, 00:10:57.457 "write": true, 00:10:57.457 "unmap": true, 00:10:57.457 "flush": true, 00:10:57.457 "reset": true, 00:10:57.457 "nvme_admin": false, 00:10:57.457 "nvme_io": false, 00:10:57.457 "nvme_io_md": false, 00:10:57.457 "write_zeroes": true, 00:10:57.457 "zcopy": true, 00:10:57.457 "get_zone_info": false, 00:10:57.457 "zone_management": false, 00:10:57.457 "zone_append": false, 00:10:57.457 "compare": false, 00:10:57.457 "compare_and_write": false, 00:10:57.457 "abort": true, 00:10:57.457 "seek_hole": false, 00:10:57.457 "seek_data": false, 00:10:57.457 "copy": true, 00:10:57.457 "nvme_iov_md": false 00:10:57.457 }, 00:10:57.457 "memory_domains": [ 00:10:57.457 { 00:10:57.457 "dma_device_id": "system", 00:10:57.457 "dma_device_type": 1 00:10:57.457 }, 00:10:57.457 { 00:10:57.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.457 "dma_device_type": 2 00:10:57.457 } 
00:10:57.457 ], 00:10:57.457 "driver_specific": {} 00:10:57.457 } 00:10:57.457 ] 00:10:57.457 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.457 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:57.457 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:57.457 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.457 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.457 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.457 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.457 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.457 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.457 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.457 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.457 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.457 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.457 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.457 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.457 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.457 11:49:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.457 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.457 "name": "Existed_Raid", 00:10:57.457 "uuid": "0a41ec3c-4079-455c-9fbd-f1f23c57d9fb", 00:10:57.457 "strip_size_kb": 64, 00:10:57.457 "state": "configuring", 00:10:57.457 "raid_level": "concat", 00:10:57.457 "superblock": true, 00:10:57.457 "num_base_bdevs": 4, 00:10:57.457 "num_base_bdevs_discovered": 1, 00:10:57.457 "num_base_bdevs_operational": 4, 00:10:57.457 "base_bdevs_list": [ 00:10:57.457 { 00:10:57.457 "name": "BaseBdev1", 00:10:57.457 "uuid": "08b8a5d1-a16a-4f9c-b62b-a90a68375416", 00:10:57.457 "is_configured": true, 00:10:57.457 "data_offset": 2048, 00:10:57.457 "data_size": 63488 00:10:57.457 }, 00:10:57.457 { 00:10:57.457 "name": "BaseBdev2", 00:10:57.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.457 "is_configured": false, 00:10:57.457 "data_offset": 0, 00:10:57.457 "data_size": 0 00:10:57.457 }, 00:10:57.457 { 00:10:57.457 "name": "BaseBdev3", 00:10:57.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.457 "is_configured": false, 00:10:57.457 "data_offset": 0, 00:10:57.457 "data_size": 0 00:10:57.457 }, 00:10:57.457 { 00:10:57.457 "name": "BaseBdev4", 00:10:57.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.457 "is_configured": false, 00:10:57.457 "data_offset": 0, 00:10:57.457 "data_size": 0 00:10:57.457 } 00:10:57.457 ] 00:10:57.457 }' 00:10:57.457 11:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.457 11:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.715 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:57.715 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.715 11:49:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.715 [2024-11-27 11:49:24.059559] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:57.715 [2024-11-27 11:49:24.059702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:57.715 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.716 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:57.716 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.716 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.716 [2024-11-27 11:49:24.071617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:57.716 [2024-11-27 11:49:24.073695] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:57.716 [2024-11-27 11:49:24.073790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:57.716 [2024-11-27 11:49:24.073825] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:57.716 [2024-11-27 11:49:24.073871] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:57.716 [2024-11-27 11:49:24.073895] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:57.716 [2024-11-27 11:49:24.073920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:57.716 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.716 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:57.716 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:57.716 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:57.716 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.716 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.716 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.716 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.716 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.716 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.716 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.716 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.716 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.716 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.716 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.716 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.716 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.716 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.974 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:57.974 "name": "Existed_Raid", 00:10:57.974 "uuid": "7b52e6a3-5fa2-4fd2-83e4-b6e8b6f9e7b9", 00:10:57.974 "strip_size_kb": 64, 00:10:57.974 "state": "configuring", 00:10:57.974 "raid_level": "concat", 00:10:57.974 "superblock": true, 00:10:57.974 "num_base_bdevs": 4, 00:10:57.974 "num_base_bdevs_discovered": 1, 00:10:57.974 "num_base_bdevs_operational": 4, 00:10:57.974 "base_bdevs_list": [ 00:10:57.974 { 00:10:57.974 "name": "BaseBdev1", 00:10:57.974 "uuid": "08b8a5d1-a16a-4f9c-b62b-a90a68375416", 00:10:57.974 "is_configured": true, 00:10:57.974 "data_offset": 2048, 00:10:57.974 "data_size": 63488 00:10:57.974 }, 00:10:57.974 { 00:10:57.974 "name": "BaseBdev2", 00:10:57.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.974 "is_configured": false, 00:10:57.974 "data_offset": 0, 00:10:57.974 "data_size": 0 00:10:57.974 }, 00:10:57.974 { 00:10:57.974 "name": "BaseBdev3", 00:10:57.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.974 "is_configured": false, 00:10:57.974 "data_offset": 0, 00:10:57.974 "data_size": 0 00:10:57.974 }, 00:10:57.974 { 00:10:57.974 "name": "BaseBdev4", 00:10:57.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.974 "is_configured": false, 00:10:57.974 "data_offset": 0, 00:10:57.974 "data_size": 0 00:10:57.974 } 00:10:57.974 ] 00:10:57.974 }' 00:10:57.974 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.974 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.233 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:58.233 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.233 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.233 [2024-11-27 11:49:24.548949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:58.233 BaseBdev2 00:10:58.233 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.233 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:58.233 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:58.233 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:58.233 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:58.233 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:58.233 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:58.233 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:58.233 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.234 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.234 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.234 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:58.234 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.234 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.234 [ 00:10:58.234 { 00:10:58.234 "name": "BaseBdev2", 00:10:58.234 "aliases": [ 00:10:58.234 "0815908c-94fa-47a6-b693-23e28dca5440" 00:10:58.234 ], 00:10:58.234 "product_name": "Malloc disk", 00:10:58.234 "block_size": 512, 00:10:58.234 "num_blocks": 65536, 00:10:58.234 "uuid": "0815908c-94fa-47a6-b693-23e28dca5440", 
00:10:58.234 "assigned_rate_limits": { 00:10:58.234 "rw_ios_per_sec": 0, 00:10:58.234 "rw_mbytes_per_sec": 0, 00:10:58.234 "r_mbytes_per_sec": 0, 00:10:58.234 "w_mbytes_per_sec": 0 00:10:58.234 }, 00:10:58.234 "claimed": true, 00:10:58.234 "claim_type": "exclusive_write", 00:10:58.234 "zoned": false, 00:10:58.234 "supported_io_types": { 00:10:58.234 "read": true, 00:10:58.234 "write": true, 00:10:58.234 "unmap": true, 00:10:58.234 "flush": true, 00:10:58.234 "reset": true, 00:10:58.234 "nvme_admin": false, 00:10:58.234 "nvme_io": false, 00:10:58.234 "nvme_io_md": false, 00:10:58.234 "write_zeroes": true, 00:10:58.234 "zcopy": true, 00:10:58.234 "get_zone_info": false, 00:10:58.234 "zone_management": false, 00:10:58.234 "zone_append": false, 00:10:58.234 "compare": false, 00:10:58.234 "compare_and_write": false, 00:10:58.234 "abort": true, 00:10:58.234 "seek_hole": false, 00:10:58.234 "seek_data": false, 00:10:58.234 "copy": true, 00:10:58.234 "nvme_iov_md": false 00:10:58.234 }, 00:10:58.234 "memory_domains": [ 00:10:58.234 { 00:10:58.234 "dma_device_id": "system", 00:10:58.234 "dma_device_type": 1 00:10:58.234 }, 00:10:58.234 { 00:10:58.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.234 "dma_device_type": 2 00:10:58.234 } 00:10:58.234 ], 00:10:58.234 "driver_specific": {} 00:10:58.234 } 00:10:58.234 ] 00:10:58.234 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.234 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:58.234 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:58.234 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:58.234 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:58.234 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:58.234 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.234 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.234 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.234 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.234 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.234 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.234 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.234 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.234 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.234 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.234 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.234 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.493 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.493 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.493 "name": "Existed_Raid", 00:10:58.493 "uuid": "7b52e6a3-5fa2-4fd2-83e4-b6e8b6f9e7b9", 00:10:58.493 "strip_size_kb": 64, 00:10:58.493 "state": "configuring", 00:10:58.493 "raid_level": "concat", 00:10:58.493 "superblock": true, 00:10:58.493 "num_base_bdevs": 4, 00:10:58.493 "num_base_bdevs_discovered": 2, 00:10:58.493 
"num_base_bdevs_operational": 4, 00:10:58.493 "base_bdevs_list": [ 00:10:58.493 { 00:10:58.493 "name": "BaseBdev1", 00:10:58.493 "uuid": "08b8a5d1-a16a-4f9c-b62b-a90a68375416", 00:10:58.493 "is_configured": true, 00:10:58.493 "data_offset": 2048, 00:10:58.493 "data_size": 63488 00:10:58.493 }, 00:10:58.493 { 00:10:58.493 "name": "BaseBdev2", 00:10:58.493 "uuid": "0815908c-94fa-47a6-b693-23e28dca5440", 00:10:58.493 "is_configured": true, 00:10:58.493 "data_offset": 2048, 00:10:58.493 "data_size": 63488 00:10:58.493 }, 00:10:58.493 { 00:10:58.493 "name": "BaseBdev3", 00:10:58.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.493 "is_configured": false, 00:10:58.493 "data_offset": 0, 00:10:58.493 "data_size": 0 00:10:58.493 }, 00:10:58.493 { 00:10:58.493 "name": "BaseBdev4", 00:10:58.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.493 "is_configured": false, 00:10:58.493 "data_offset": 0, 00:10:58.493 "data_size": 0 00:10:58.493 } 00:10:58.493 ] 00:10:58.493 }' 00:10:58.493 11:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.493 11:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.752 [2024-11-27 11:49:25.081759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:58.752 BaseBdev3 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.752 [ 00:10:58.752 { 00:10:58.752 "name": "BaseBdev3", 00:10:58.752 "aliases": [ 00:10:58.752 "ab7247fc-e048-41d8-8dfd-10ce809aa6c4" 00:10:58.752 ], 00:10:58.752 "product_name": "Malloc disk", 00:10:58.752 "block_size": 512, 00:10:58.752 "num_blocks": 65536, 00:10:58.752 "uuid": "ab7247fc-e048-41d8-8dfd-10ce809aa6c4", 00:10:58.752 "assigned_rate_limits": { 00:10:58.752 "rw_ios_per_sec": 0, 00:10:58.752 "rw_mbytes_per_sec": 0, 00:10:58.752 "r_mbytes_per_sec": 0, 00:10:58.752 "w_mbytes_per_sec": 0 00:10:58.752 }, 00:10:58.752 "claimed": true, 00:10:58.752 "claim_type": "exclusive_write", 00:10:58.752 "zoned": false, 00:10:58.752 "supported_io_types": { 
00:10:58.752 "read": true, 00:10:58.752 "write": true, 00:10:58.752 "unmap": true, 00:10:58.752 "flush": true, 00:10:58.752 "reset": true, 00:10:58.752 "nvme_admin": false, 00:10:58.752 "nvme_io": false, 00:10:58.752 "nvme_io_md": false, 00:10:58.752 "write_zeroes": true, 00:10:58.752 "zcopy": true, 00:10:58.752 "get_zone_info": false, 00:10:58.752 "zone_management": false, 00:10:58.752 "zone_append": false, 00:10:58.752 "compare": false, 00:10:58.752 "compare_and_write": false, 00:10:58.752 "abort": true, 00:10:58.752 "seek_hole": false, 00:10:58.752 "seek_data": false, 00:10:58.752 "copy": true, 00:10:58.752 "nvme_iov_md": false 00:10:58.752 }, 00:10:58.752 "memory_domains": [ 00:10:58.752 { 00:10:58.752 "dma_device_id": "system", 00:10:58.752 "dma_device_type": 1 00:10:58.752 }, 00:10:58.752 { 00:10:58.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.752 "dma_device_type": 2 00:10:58.752 } 00:10:58.752 ], 00:10:58.752 "driver_specific": {} 00:10:58.752 } 00:10:58.752 ] 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.752 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.012 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.012 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.012 "name": "Existed_Raid", 00:10:59.012 "uuid": "7b52e6a3-5fa2-4fd2-83e4-b6e8b6f9e7b9", 00:10:59.012 "strip_size_kb": 64, 00:10:59.012 "state": "configuring", 00:10:59.012 "raid_level": "concat", 00:10:59.012 "superblock": true, 00:10:59.012 "num_base_bdevs": 4, 00:10:59.012 "num_base_bdevs_discovered": 3, 00:10:59.012 "num_base_bdevs_operational": 4, 00:10:59.012 "base_bdevs_list": [ 00:10:59.012 { 00:10:59.012 "name": "BaseBdev1", 00:10:59.012 "uuid": "08b8a5d1-a16a-4f9c-b62b-a90a68375416", 00:10:59.012 "is_configured": true, 00:10:59.012 "data_offset": 2048, 00:10:59.012 "data_size": 63488 00:10:59.012 }, 00:10:59.012 { 00:10:59.012 "name": "BaseBdev2", 00:10:59.012 
"uuid": "0815908c-94fa-47a6-b693-23e28dca5440", 00:10:59.012 "is_configured": true, 00:10:59.012 "data_offset": 2048, 00:10:59.012 "data_size": 63488 00:10:59.012 }, 00:10:59.012 { 00:10:59.012 "name": "BaseBdev3", 00:10:59.012 "uuid": "ab7247fc-e048-41d8-8dfd-10ce809aa6c4", 00:10:59.012 "is_configured": true, 00:10:59.012 "data_offset": 2048, 00:10:59.012 "data_size": 63488 00:10:59.012 }, 00:10:59.012 { 00:10:59.012 "name": "BaseBdev4", 00:10:59.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.012 "is_configured": false, 00:10:59.012 "data_offset": 0, 00:10:59.012 "data_size": 0 00:10:59.012 } 00:10:59.012 ] 00:10:59.012 }' 00:10:59.012 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.012 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.271 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:59.271 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.271 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.530 [2024-11-27 11:49:25.659597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:59.530 [2024-11-27 11:49:25.659905] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:59.530 [2024-11-27 11:49:25.659924] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:59.530 [2024-11-27 11:49:25.660198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:59.530 BaseBdev4 00:10:59.530 [2024-11-27 11:49:25.660385] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:59.530 [2024-11-27 11:49:25.660404] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:59.530 [2024-11-27 11:49:25.660552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.530 [ 00:10:59.530 { 00:10:59.530 "name": "BaseBdev4", 00:10:59.530 "aliases": [ 00:10:59.530 "fd3b6d9c-2e21-4455-9c82-16633328daa1" 00:10:59.530 ], 00:10:59.530 "product_name": "Malloc disk", 00:10:59.530 "block_size": 512, 00:10:59.530 
"num_blocks": 65536, 00:10:59.530 "uuid": "fd3b6d9c-2e21-4455-9c82-16633328daa1", 00:10:59.530 "assigned_rate_limits": { 00:10:59.530 "rw_ios_per_sec": 0, 00:10:59.530 "rw_mbytes_per_sec": 0, 00:10:59.530 "r_mbytes_per_sec": 0, 00:10:59.530 "w_mbytes_per_sec": 0 00:10:59.530 }, 00:10:59.530 "claimed": true, 00:10:59.530 "claim_type": "exclusive_write", 00:10:59.530 "zoned": false, 00:10:59.530 "supported_io_types": { 00:10:59.530 "read": true, 00:10:59.530 "write": true, 00:10:59.530 "unmap": true, 00:10:59.530 "flush": true, 00:10:59.530 "reset": true, 00:10:59.530 "nvme_admin": false, 00:10:59.530 "nvme_io": false, 00:10:59.530 "nvme_io_md": false, 00:10:59.530 "write_zeroes": true, 00:10:59.530 "zcopy": true, 00:10:59.530 "get_zone_info": false, 00:10:59.530 "zone_management": false, 00:10:59.530 "zone_append": false, 00:10:59.530 "compare": false, 00:10:59.530 "compare_and_write": false, 00:10:59.530 "abort": true, 00:10:59.530 "seek_hole": false, 00:10:59.530 "seek_data": false, 00:10:59.530 "copy": true, 00:10:59.530 "nvme_iov_md": false 00:10:59.530 }, 00:10:59.530 "memory_domains": [ 00:10:59.530 { 00:10:59.530 "dma_device_id": "system", 00:10:59.530 "dma_device_type": 1 00:10:59.530 }, 00:10:59.530 { 00:10:59.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.530 "dma_device_type": 2 00:10:59.530 } 00:10:59.530 ], 00:10:59.530 "driver_specific": {} 00:10:59.530 } 00:10:59.530 ] 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.530 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.530 "name": "Existed_Raid", 00:10:59.530 "uuid": "7b52e6a3-5fa2-4fd2-83e4-b6e8b6f9e7b9", 00:10:59.530 "strip_size_kb": 64, 00:10:59.530 "state": "online", 00:10:59.530 "raid_level": "concat", 00:10:59.530 "superblock": true, 00:10:59.530 "num_base_bdevs": 4, 
00:10:59.530 "num_base_bdevs_discovered": 4, 00:10:59.530 "num_base_bdevs_operational": 4, 00:10:59.530 "base_bdevs_list": [ 00:10:59.530 { 00:10:59.530 "name": "BaseBdev1", 00:10:59.530 "uuid": "08b8a5d1-a16a-4f9c-b62b-a90a68375416", 00:10:59.530 "is_configured": true, 00:10:59.530 "data_offset": 2048, 00:10:59.530 "data_size": 63488 00:10:59.530 }, 00:10:59.530 { 00:10:59.530 "name": "BaseBdev2", 00:10:59.530 "uuid": "0815908c-94fa-47a6-b693-23e28dca5440", 00:10:59.530 "is_configured": true, 00:10:59.530 "data_offset": 2048, 00:10:59.530 "data_size": 63488 00:10:59.530 }, 00:10:59.530 { 00:10:59.530 "name": "BaseBdev3", 00:10:59.530 "uuid": "ab7247fc-e048-41d8-8dfd-10ce809aa6c4", 00:10:59.530 "is_configured": true, 00:10:59.530 "data_offset": 2048, 00:10:59.530 "data_size": 63488 00:10:59.530 }, 00:10:59.530 { 00:10:59.530 "name": "BaseBdev4", 00:10:59.530 "uuid": "fd3b6d9c-2e21-4455-9c82-16633328daa1", 00:10:59.530 "is_configured": true, 00:10:59.531 "data_offset": 2048, 00:10:59.531 "data_size": 63488 00:10:59.531 } 00:10:59.531 ] 00:10:59.531 }' 00:10:59.531 11:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.531 11:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.789 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:59.789 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:59.789 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:59.789 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:59.789 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:59.789 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:59.789 
11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:59.789 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:59.789 11:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.789 11:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.789 [2024-11-27 11:49:26.159150] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.049 11:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.049 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:00.049 "name": "Existed_Raid", 00:11:00.049 "aliases": [ 00:11:00.049 "7b52e6a3-5fa2-4fd2-83e4-b6e8b6f9e7b9" 00:11:00.049 ], 00:11:00.049 "product_name": "Raid Volume", 00:11:00.049 "block_size": 512, 00:11:00.049 "num_blocks": 253952, 00:11:00.049 "uuid": "7b52e6a3-5fa2-4fd2-83e4-b6e8b6f9e7b9", 00:11:00.049 "assigned_rate_limits": { 00:11:00.049 "rw_ios_per_sec": 0, 00:11:00.049 "rw_mbytes_per_sec": 0, 00:11:00.049 "r_mbytes_per_sec": 0, 00:11:00.049 "w_mbytes_per_sec": 0 00:11:00.049 }, 00:11:00.049 "claimed": false, 00:11:00.049 "zoned": false, 00:11:00.049 "supported_io_types": { 00:11:00.049 "read": true, 00:11:00.049 "write": true, 00:11:00.049 "unmap": true, 00:11:00.049 "flush": true, 00:11:00.049 "reset": true, 00:11:00.049 "nvme_admin": false, 00:11:00.049 "nvme_io": false, 00:11:00.049 "nvme_io_md": false, 00:11:00.049 "write_zeroes": true, 00:11:00.049 "zcopy": false, 00:11:00.049 "get_zone_info": false, 00:11:00.049 "zone_management": false, 00:11:00.049 "zone_append": false, 00:11:00.049 "compare": false, 00:11:00.049 "compare_and_write": false, 00:11:00.049 "abort": false, 00:11:00.049 "seek_hole": false, 00:11:00.049 "seek_data": false, 00:11:00.049 "copy": false, 00:11:00.049 
"nvme_iov_md": false 00:11:00.049 }, 00:11:00.049 "memory_domains": [ 00:11:00.049 { 00:11:00.049 "dma_device_id": "system", 00:11:00.049 "dma_device_type": 1 00:11:00.049 }, 00:11:00.049 { 00:11:00.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.049 "dma_device_type": 2 00:11:00.049 }, 00:11:00.049 { 00:11:00.049 "dma_device_id": "system", 00:11:00.049 "dma_device_type": 1 00:11:00.049 }, 00:11:00.049 { 00:11:00.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.049 "dma_device_type": 2 00:11:00.049 }, 00:11:00.049 { 00:11:00.049 "dma_device_id": "system", 00:11:00.049 "dma_device_type": 1 00:11:00.049 }, 00:11:00.049 { 00:11:00.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.049 "dma_device_type": 2 00:11:00.049 }, 00:11:00.049 { 00:11:00.049 "dma_device_id": "system", 00:11:00.049 "dma_device_type": 1 00:11:00.049 }, 00:11:00.049 { 00:11:00.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.049 "dma_device_type": 2 00:11:00.049 } 00:11:00.049 ], 00:11:00.049 "driver_specific": { 00:11:00.049 "raid": { 00:11:00.049 "uuid": "7b52e6a3-5fa2-4fd2-83e4-b6e8b6f9e7b9", 00:11:00.049 "strip_size_kb": 64, 00:11:00.049 "state": "online", 00:11:00.049 "raid_level": "concat", 00:11:00.049 "superblock": true, 00:11:00.049 "num_base_bdevs": 4, 00:11:00.049 "num_base_bdevs_discovered": 4, 00:11:00.049 "num_base_bdevs_operational": 4, 00:11:00.049 "base_bdevs_list": [ 00:11:00.049 { 00:11:00.049 "name": "BaseBdev1", 00:11:00.049 "uuid": "08b8a5d1-a16a-4f9c-b62b-a90a68375416", 00:11:00.049 "is_configured": true, 00:11:00.049 "data_offset": 2048, 00:11:00.049 "data_size": 63488 00:11:00.049 }, 00:11:00.049 { 00:11:00.049 "name": "BaseBdev2", 00:11:00.049 "uuid": "0815908c-94fa-47a6-b693-23e28dca5440", 00:11:00.049 "is_configured": true, 00:11:00.049 "data_offset": 2048, 00:11:00.049 "data_size": 63488 00:11:00.049 }, 00:11:00.049 { 00:11:00.049 "name": "BaseBdev3", 00:11:00.049 "uuid": "ab7247fc-e048-41d8-8dfd-10ce809aa6c4", 00:11:00.049 "is_configured": true, 
00:11:00.049 "data_offset": 2048, 00:11:00.049 "data_size": 63488 00:11:00.049 }, 00:11:00.049 { 00:11:00.049 "name": "BaseBdev4", 00:11:00.049 "uuid": "fd3b6d9c-2e21-4455-9c82-16633328daa1", 00:11:00.049 "is_configured": true, 00:11:00.049 "data_offset": 2048, 00:11:00.049 "data_size": 63488 00:11:00.049 } 00:11:00.049 ] 00:11:00.049 } 00:11:00.049 } 00:11:00.049 }' 00:11:00.049 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:00.049 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:00.049 BaseBdev2 00:11:00.049 BaseBdev3 00:11:00.049 BaseBdev4' 00:11:00.049 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.049 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:00.049 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.049 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:00.049 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.049 11:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.049 11:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.049 11:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.049 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.049 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.049 11:49:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.050 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:00.050 11:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.050 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.050 11:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.050 11:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.050 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.050 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.050 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.050 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:00.050 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.050 11:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.050 11:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.050 11:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.308 [2024-11-27 11:49:26.498270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:00.308 [2024-11-27 11:49:26.498346] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:00.308 [2024-11-27 11:49:26.498422] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:00.308 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.308 "name": "Existed_Raid", 00:11:00.308 "uuid": "7b52e6a3-5fa2-4fd2-83e4-b6e8b6f9e7b9", 00:11:00.308 "strip_size_kb": 64, 00:11:00.308 "state": "offline", 00:11:00.308 "raid_level": "concat", 00:11:00.308 "superblock": true, 00:11:00.308 "num_base_bdevs": 4, 00:11:00.308 "num_base_bdevs_discovered": 3, 00:11:00.308 "num_base_bdevs_operational": 3, 00:11:00.308 "base_bdevs_list": [ 00:11:00.308 { 00:11:00.308 "name": null, 00:11:00.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.308 "is_configured": false, 00:11:00.308 "data_offset": 0, 00:11:00.308 "data_size": 63488 00:11:00.308 }, 00:11:00.308 { 00:11:00.308 "name": "BaseBdev2", 00:11:00.308 "uuid": "0815908c-94fa-47a6-b693-23e28dca5440", 00:11:00.308 "is_configured": true, 00:11:00.308 "data_offset": 2048, 00:11:00.308 "data_size": 63488 00:11:00.308 }, 00:11:00.308 { 00:11:00.308 "name": "BaseBdev3", 00:11:00.308 "uuid": "ab7247fc-e048-41d8-8dfd-10ce809aa6c4", 00:11:00.308 "is_configured": true, 00:11:00.308 "data_offset": 2048, 00:11:00.308 "data_size": 63488 00:11:00.308 }, 00:11:00.308 { 00:11:00.308 "name": "BaseBdev4", 00:11:00.309 "uuid": "fd3b6d9c-2e21-4455-9c82-16633328daa1", 00:11:00.309 "is_configured": true, 00:11:00.309 "data_offset": 2048, 00:11:00.309 "data_size": 63488 00:11:00.309 } 00:11:00.309 ] 00:11:00.309 }' 00:11:00.309 11:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.309 11:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.874 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:00.874 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:00.874 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.874 
11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.874 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:00.874 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.874 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.874 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:00.874 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:00.874 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:00.874 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.874 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.874 [2024-11-27 11:49:27.156447] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:01.133 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.133 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:01.133 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:01.133 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.133 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.133 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:01.133 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.133 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:01.133 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:01.133 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:01.133 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:01.133 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.133 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.133 [2024-11-27 11:49:27.317207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:01.133 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.133 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:01.133 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:01.133 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.133 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:01.133 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.133 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.133 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.133 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:01.133 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:01.133 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:01.133 11:49:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.133 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.133 [2024-11-27 11:49:27.477315] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:01.133 [2024-11-27 11:49:27.477420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.393 BaseBdev2 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.393 [ 00:11:01.393 { 00:11:01.393 "name": "BaseBdev2", 00:11:01.393 "aliases": [ 00:11:01.393 
"41fb10a0-ee40-4bf1-8dd0-bb5a18b0e7ca" 00:11:01.393 ], 00:11:01.393 "product_name": "Malloc disk", 00:11:01.393 "block_size": 512, 00:11:01.393 "num_blocks": 65536, 00:11:01.393 "uuid": "41fb10a0-ee40-4bf1-8dd0-bb5a18b0e7ca", 00:11:01.393 "assigned_rate_limits": { 00:11:01.393 "rw_ios_per_sec": 0, 00:11:01.393 "rw_mbytes_per_sec": 0, 00:11:01.393 "r_mbytes_per_sec": 0, 00:11:01.393 "w_mbytes_per_sec": 0 00:11:01.393 }, 00:11:01.393 "claimed": false, 00:11:01.393 "zoned": false, 00:11:01.393 "supported_io_types": { 00:11:01.393 "read": true, 00:11:01.393 "write": true, 00:11:01.393 "unmap": true, 00:11:01.393 "flush": true, 00:11:01.393 "reset": true, 00:11:01.393 "nvme_admin": false, 00:11:01.393 "nvme_io": false, 00:11:01.393 "nvme_io_md": false, 00:11:01.393 "write_zeroes": true, 00:11:01.393 "zcopy": true, 00:11:01.393 "get_zone_info": false, 00:11:01.393 "zone_management": false, 00:11:01.393 "zone_append": false, 00:11:01.393 "compare": false, 00:11:01.393 "compare_and_write": false, 00:11:01.393 "abort": true, 00:11:01.393 "seek_hole": false, 00:11:01.393 "seek_data": false, 00:11:01.393 "copy": true, 00:11:01.393 "nvme_iov_md": false 00:11:01.393 }, 00:11:01.393 "memory_domains": [ 00:11:01.393 { 00:11:01.393 "dma_device_id": "system", 00:11:01.393 "dma_device_type": 1 00:11:01.393 }, 00:11:01.393 { 00:11:01.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.393 "dma_device_type": 2 00:11:01.393 } 00:11:01.393 ], 00:11:01.393 "driver_specific": {} 00:11:01.393 } 00:11:01.393 ] 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:01.393 11:49:27 
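The `waitforbdev` helper traced above (from `autotest_common.sh`) retries `bdev_get_bdevs -b <name> -t 2000` until the freshly created bdev is visible. A hedged, generic sketch of that poll-until-timeout pattern — the function and variable names here are illustrative, not SPDK's actual implementation:

```python
import time

def wait_for(predicate, timeout_s=2.0, interval_s=0.1):
    """Poll predicate() until it returns truthy or the deadline passes."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval_s)
    return bool(predicate())  # one final check at the deadline

# Stand-in for "does bdev_get_bdevs -b BaseBdev2 succeed yet?"
created = {"BaseBdev2"}
assert wait_for(lambda: "BaseBdev2" in created)
assert not wait_for(lambda: "BaseBdev9" in created, timeout_s=0.3)
```

The real helper shells out to `rpc_cmd` on each attempt; the timeout (2000 ms by default, as seen in `bdev_timeout=2000` above) bounds how long the test blocks on a bdev that never appears.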
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.393 BaseBdev3 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:01.393 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:01.394 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:01.394 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:01.394 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:01.394 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:01.394 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:01.394 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.394 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.653 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.653 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:01.653 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.653 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.653 [ 00:11:01.653 { 
00:11:01.653 "name": "BaseBdev3", 00:11:01.653 "aliases": [ 00:11:01.653 "5ca31d80-42e9-44e7-8d2e-e84c0b78ee9a" 00:11:01.653 ], 00:11:01.653 "product_name": "Malloc disk", 00:11:01.653 "block_size": 512, 00:11:01.653 "num_blocks": 65536, 00:11:01.653 "uuid": "5ca31d80-42e9-44e7-8d2e-e84c0b78ee9a", 00:11:01.653 "assigned_rate_limits": { 00:11:01.653 "rw_ios_per_sec": 0, 00:11:01.653 "rw_mbytes_per_sec": 0, 00:11:01.653 "r_mbytes_per_sec": 0, 00:11:01.653 "w_mbytes_per_sec": 0 00:11:01.653 }, 00:11:01.653 "claimed": false, 00:11:01.653 "zoned": false, 00:11:01.653 "supported_io_types": { 00:11:01.653 "read": true, 00:11:01.653 "write": true, 00:11:01.653 "unmap": true, 00:11:01.653 "flush": true, 00:11:01.653 "reset": true, 00:11:01.653 "nvme_admin": false, 00:11:01.653 "nvme_io": false, 00:11:01.653 "nvme_io_md": false, 00:11:01.653 "write_zeroes": true, 00:11:01.653 "zcopy": true, 00:11:01.653 "get_zone_info": false, 00:11:01.653 "zone_management": false, 00:11:01.653 "zone_append": false, 00:11:01.653 "compare": false, 00:11:01.653 "compare_and_write": false, 00:11:01.653 "abort": true, 00:11:01.653 "seek_hole": false, 00:11:01.653 "seek_data": false, 00:11:01.653 "copy": true, 00:11:01.653 "nvme_iov_md": false 00:11:01.653 }, 00:11:01.653 "memory_domains": [ 00:11:01.653 { 00:11:01.653 "dma_device_id": "system", 00:11:01.653 "dma_device_type": 1 00:11:01.653 }, 00:11:01.653 { 00:11:01.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.653 "dma_device_type": 2 00:11:01.653 } 00:11:01.653 ], 00:11:01.653 "driver_specific": {} 00:11:01.653 } 00:11:01.653 ] 00:11:01.653 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.653 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:01.653 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:01.653 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:01.653 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:01.653 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.653 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.653 BaseBdev4 00:11:01.653 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.653 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:01.653 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:01.653 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:01.653 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:01.653 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:01.653 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:01.653 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:01.653 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.653 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:01.654 [ 00:11:01.654 { 00:11:01.654 "name": "BaseBdev4", 00:11:01.654 "aliases": [ 00:11:01.654 "1f0b8ab2-6239-4251-97cf-a8f00501de1f" 00:11:01.654 ], 00:11:01.654 "product_name": "Malloc disk", 00:11:01.654 "block_size": 512, 00:11:01.654 "num_blocks": 65536, 00:11:01.654 "uuid": "1f0b8ab2-6239-4251-97cf-a8f00501de1f", 00:11:01.654 "assigned_rate_limits": { 00:11:01.654 "rw_ios_per_sec": 0, 00:11:01.654 "rw_mbytes_per_sec": 0, 00:11:01.654 "r_mbytes_per_sec": 0, 00:11:01.654 "w_mbytes_per_sec": 0 00:11:01.654 }, 00:11:01.654 "claimed": false, 00:11:01.654 "zoned": false, 00:11:01.654 "supported_io_types": { 00:11:01.654 "read": true, 00:11:01.654 "write": true, 00:11:01.654 "unmap": true, 00:11:01.654 "flush": true, 00:11:01.654 "reset": true, 00:11:01.654 "nvme_admin": false, 00:11:01.654 "nvme_io": false, 00:11:01.654 "nvme_io_md": false, 00:11:01.654 "write_zeroes": true, 00:11:01.654 "zcopy": true, 00:11:01.654 "get_zone_info": false, 00:11:01.654 "zone_management": false, 00:11:01.654 "zone_append": false, 00:11:01.654 "compare": false, 00:11:01.654 "compare_and_write": false, 00:11:01.654 "abort": true, 00:11:01.654 "seek_hole": false, 00:11:01.654 "seek_data": false, 00:11:01.654 "copy": true, 00:11:01.654 "nvme_iov_md": false 00:11:01.654 }, 00:11:01.654 "memory_domains": [ 00:11:01.654 { 00:11:01.654 "dma_device_id": "system", 00:11:01.654 "dma_device_type": 1 00:11:01.654 }, 00:11:01.654 { 00:11:01.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.654 "dma_device_type": 2 00:11:01.654 } 00:11:01.654 ], 00:11:01.654 "driver_specific": {} 00:11:01.654 } 00:11:01.654 ] 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:01.654 11:49:27 
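Each Malloc base bdev created above (`bdev_malloc_create 32 512`) is 65536 blocks of 512 bytes. Because the RAID is created with `-s` (superblock), RAID metadata occupies the start of each member, which is why configured members report `"data_offset": 2048` and `"data_size": 63488` in the dumps. A quick check of that arithmetic — the 2048-block offset is taken from the logged output, not derived from SPDK source:

```python
BLOCK_SIZE = 512
NUM_BLOCKS = 32 * 1024 * 1024 // BLOCK_SIZE  # 32 MiB malloc bdev -> 65536 blocks
DATA_OFFSET = 2048                           # superblock region, per the logged base_bdevs_list

data_size = NUM_BLOCKS - DATA_OFFSET
assert NUM_BLOCKS == 65536
assert data_size == 63488                    # matches "data_size": 63488 in the log

# concat capacity is the sum of the members' data regions (4 members here)
total_data_blocks = 4 * data_size
print(total_data_blocks)                     # → 253952
```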
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.654 [2024-11-27 11:49:27.888666] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:01.654 [2024-11-27 11:49:27.888787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:01.654 [2024-11-27 11:49:27.888848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:01.654 [2024-11-27 11:49:27.890883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:01.654 [2024-11-27 11:49:27.890979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.654 "name": "Existed_Raid", 00:11:01.654 "uuid": "0e5022da-8ee5-4b56-be3b-c49235f91711", 00:11:01.654 "strip_size_kb": 64, 00:11:01.654 "state": "configuring", 00:11:01.654 "raid_level": "concat", 00:11:01.654 "superblock": true, 00:11:01.654 "num_base_bdevs": 4, 00:11:01.654 "num_base_bdevs_discovered": 3, 00:11:01.654 "num_base_bdevs_operational": 4, 00:11:01.654 "base_bdevs_list": [ 00:11:01.654 { 00:11:01.654 "name": "BaseBdev1", 00:11:01.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.654 "is_configured": false, 00:11:01.654 "data_offset": 0, 00:11:01.654 "data_size": 0 00:11:01.654 }, 00:11:01.654 { 00:11:01.654 "name": "BaseBdev2", 00:11:01.654 "uuid": "41fb10a0-ee40-4bf1-8dd0-bb5a18b0e7ca", 00:11:01.654 "is_configured": true, 00:11:01.654 "data_offset": 2048, 00:11:01.654 "data_size": 63488 
00:11:01.654 }, 00:11:01.654 { 00:11:01.654 "name": "BaseBdev3", 00:11:01.654 "uuid": "5ca31d80-42e9-44e7-8d2e-e84c0b78ee9a", 00:11:01.654 "is_configured": true, 00:11:01.654 "data_offset": 2048, 00:11:01.654 "data_size": 63488 00:11:01.654 }, 00:11:01.654 { 00:11:01.654 "name": "BaseBdev4", 00:11:01.654 "uuid": "1f0b8ab2-6239-4251-97cf-a8f00501de1f", 00:11:01.654 "is_configured": true, 00:11:01.654 "data_offset": 2048, 00:11:01.654 "data_size": 63488 00:11:01.654 } 00:11:01.654 ] 00:11:01.654 }' 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.654 11:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.224 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:02.224 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.224 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.224 [2024-11-27 11:49:28.303964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:02.224 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.224 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:02.224 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.224 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.224 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:02.224 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.224 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:02.224 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.224 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.224 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.224 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.224 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.224 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.224 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.224 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.224 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.224 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.224 "name": "Existed_Raid", 00:11:02.225 "uuid": "0e5022da-8ee5-4b56-be3b-c49235f91711", 00:11:02.225 "strip_size_kb": 64, 00:11:02.225 "state": "configuring", 00:11:02.225 "raid_level": "concat", 00:11:02.225 "superblock": true, 00:11:02.225 "num_base_bdevs": 4, 00:11:02.225 "num_base_bdevs_discovered": 2, 00:11:02.225 "num_base_bdevs_operational": 4, 00:11:02.225 "base_bdevs_list": [ 00:11:02.225 { 00:11:02.225 "name": "BaseBdev1", 00:11:02.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.225 "is_configured": false, 00:11:02.225 "data_offset": 0, 00:11:02.225 "data_size": 0 00:11:02.225 }, 00:11:02.225 { 00:11:02.225 "name": null, 00:11:02.225 "uuid": "41fb10a0-ee40-4bf1-8dd0-bb5a18b0e7ca", 00:11:02.225 "is_configured": false, 00:11:02.225 "data_offset": 0, 00:11:02.225 "data_size": 63488 
00:11:02.225 }, 00:11:02.225 { 00:11:02.225 "name": "BaseBdev3", 00:11:02.225 "uuid": "5ca31d80-42e9-44e7-8d2e-e84c0b78ee9a", 00:11:02.225 "is_configured": true, 00:11:02.225 "data_offset": 2048, 00:11:02.225 "data_size": 63488 00:11:02.225 }, 00:11:02.225 { 00:11:02.225 "name": "BaseBdev4", 00:11:02.225 "uuid": "1f0b8ab2-6239-4251-97cf-a8f00501de1f", 00:11:02.225 "is_configured": true, 00:11:02.225 "data_offset": 2048, 00:11:02.225 "data_size": 63488 00:11:02.225 } 00:11:02.225 ] 00:11:02.225 }' 00:11:02.225 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.225 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.485 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.485 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:02.485 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.485 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.485 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.485 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:02.485 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:02.485 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.485 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.485 [2024-11-27 11:49:28.800146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:02.485 BaseBdev1 00:11:02.485 11:49:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.485 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:02.485 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:02.485 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:02.485 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:02.485 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:02.485 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:02.485 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:02.485 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.485 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.485 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.485 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:02.485 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.485 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.485 [ 00:11:02.485 { 00:11:02.485 "name": "BaseBdev1", 00:11:02.485 "aliases": [ 00:11:02.485 "cc35072a-2144-4c09-8387-c301d0731d85" 00:11:02.485 ], 00:11:02.485 "product_name": "Malloc disk", 00:11:02.485 "block_size": 512, 00:11:02.485 "num_blocks": 65536, 00:11:02.485 "uuid": "cc35072a-2144-4c09-8387-c301d0731d85", 00:11:02.485 "assigned_rate_limits": { 00:11:02.485 "rw_ios_per_sec": 0, 00:11:02.485 "rw_mbytes_per_sec": 0, 
00:11:02.485 "r_mbytes_per_sec": 0, 00:11:02.485 "w_mbytes_per_sec": 0 00:11:02.485 }, 00:11:02.485 "claimed": true, 00:11:02.485 "claim_type": "exclusive_write", 00:11:02.485 "zoned": false, 00:11:02.485 "supported_io_types": { 00:11:02.485 "read": true, 00:11:02.485 "write": true, 00:11:02.485 "unmap": true, 00:11:02.485 "flush": true, 00:11:02.485 "reset": true, 00:11:02.485 "nvme_admin": false, 00:11:02.485 "nvme_io": false, 00:11:02.485 "nvme_io_md": false, 00:11:02.485 "write_zeroes": true, 00:11:02.485 "zcopy": true, 00:11:02.485 "get_zone_info": false, 00:11:02.485 "zone_management": false, 00:11:02.485 "zone_append": false, 00:11:02.485 "compare": false, 00:11:02.485 "compare_and_write": false, 00:11:02.485 "abort": true, 00:11:02.485 "seek_hole": false, 00:11:02.485 "seek_data": false, 00:11:02.485 "copy": true, 00:11:02.485 "nvme_iov_md": false 00:11:02.485 }, 00:11:02.485 "memory_domains": [ 00:11:02.485 { 00:11:02.485 "dma_device_id": "system", 00:11:02.485 "dma_device_type": 1 00:11:02.485 }, 00:11:02.485 { 00:11:02.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.485 "dma_device_type": 2 00:11:02.485 } 00:11:02.485 ], 00:11:02.485 "driver_specific": {} 00:11:02.485 } 00:11:02.485 ] 00:11:02.486 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.486 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:02.486 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:02.486 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.486 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.486 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:02.486 11:49:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.486 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.486 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.486 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.486 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.486 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.486 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.486 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.486 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.486 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.486 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.745 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.745 "name": "Existed_Raid", 00:11:02.745 "uuid": "0e5022da-8ee5-4b56-be3b-c49235f91711", 00:11:02.745 "strip_size_kb": 64, 00:11:02.745 "state": "configuring", 00:11:02.745 "raid_level": "concat", 00:11:02.745 "superblock": true, 00:11:02.745 "num_base_bdevs": 4, 00:11:02.745 "num_base_bdevs_discovered": 3, 00:11:02.745 "num_base_bdevs_operational": 4, 00:11:02.745 "base_bdevs_list": [ 00:11:02.745 { 00:11:02.745 "name": "BaseBdev1", 00:11:02.745 "uuid": "cc35072a-2144-4c09-8387-c301d0731d85", 00:11:02.745 "is_configured": true, 00:11:02.745 "data_offset": 2048, 00:11:02.745 "data_size": 63488 00:11:02.745 }, 00:11:02.745 { 
00:11:02.745 "name": null, 00:11:02.745 "uuid": "41fb10a0-ee40-4bf1-8dd0-bb5a18b0e7ca", 00:11:02.745 "is_configured": false, 00:11:02.745 "data_offset": 0, 00:11:02.745 "data_size": 63488 00:11:02.745 }, 00:11:02.745 { 00:11:02.745 "name": "BaseBdev3", 00:11:02.745 "uuid": "5ca31d80-42e9-44e7-8d2e-e84c0b78ee9a", 00:11:02.745 "is_configured": true, 00:11:02.745 "data_offset": 2048, 00:11:02.745 "data_size": 63488 00:11:02.745 }, 00:11:02.745 { 00:11:02.745 "name": "BaseBdev4", 00:11:02.745 "uuid": "1f0b8ab2-6239-4251-97cf-a8f00501de1f", 00:11:02.745 "is_configured": true, 00:11:02.745 "data_offset": 2048, 00:11:02.745 "data_size": 63488 00:11:02.745 } 00:11:02.745 ] 00:11:02.745 }' 00:11:02.745 11:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.745 11:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.004 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.004 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:03.004 11:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.004 11:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.004 11:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.004 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:03.004 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:03.004 11:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.004 11:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.004 [2024-11-27 11:49:29.331374] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:03.004 11:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.004 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:03.004 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.004 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.004 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.004 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.004 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.004 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.004 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.004 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.004 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.004 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.004 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.004 11:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.004 11:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.004 11:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.004 11:49:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.004 "name": "Existed_Raid", 00:11:03.004 "uuid": "0e5022da-8ee5-4b56-be3b-c49235f91711", 00:11:03.004 "strip_size_kb": 64, 00:11:03.004 "state": "configuring", 00:11:03.004 "raid_level": "concat", 00:11:03.004 "superblock": true, 00:11:03.004 "num_base_bdevs": 4, 00:11:03.004 "num_base_bdevs_discovered": 2, 00:11:03.004 "num_base_bdevs_operational": 4, 00:11:03.004 "base_bdevs_list": [ 00:11:03.004 { 00:11:03.004 "name": "BaseBdev1", 00:11:03.004 "uuid": "cc35072a-2144-4c09-8387-c301d0731d85", 00:11:03.004 "is_configured": true, 00:11:03.004 "data_offset": 2048, 00:11:03.004 "data_size": 63488 00:11:03.004 }, 00:11:03.004 { 00:11:03.004 "name": null, 00:11:03.004 "uuid": "41fb10a0-ee40-4bf1-8dd0-bb5a18b0e7ca", 00:11:03.004 "is_configured": false, 00:11:03.004 "data_offset": 0, 00:11:03.004 "data_size": 63488 00:11:03.004 }, 00:11:03.004 { 00:11:03.004 "name": null, 00:11:03.004 "uuid": "5ca31d80-42e9-44e7-8d2e-e84c0b78ee9a", 00:11:03.004 "is_configured": false, 00:11:03.004 "data_offset": 0, 00:11:03.004 "data_size": 63488 00:11:03.004 }, 00:11:03.004 { 00:11:03.004 "name": "BaseBdev4", 00:11:03.004 "uuid": "1f0b8ab2-6239-4251-97cf-a8f00501de1f", 00:11:03.004 "is_configured": true, 00:11:03.004 "data_offset": 2048, 00:11:03.004 "data_size": 63488 00:11:03.004 } 00:11:03.004 ] 00:11:03.004 }' 00:11:03.004 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.004 11:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.573 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.573 11:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.573 11:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.573 11:49:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:03.573 11:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.573 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:03.573 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:03.573 11:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.573 11:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.573 [2024-11-27 11:49:29.810542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:03.573 11:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.573 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:03.573 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.573 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.573 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.573 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.573 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.573 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.573 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.573 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:03.573 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.573 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.573 11:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.573 11:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.573 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.573 11:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.573 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.573 "name": "Existed_Raid", 00:11:03.573 "uuid": "0e5022da-8ee5-4b56-be3b-c49235f91711", 00:11:03.573 "strip_size_kb": 64, 00:11:03.573 "state": "configuring", 00:11:03.573 "raid_level": "concat", 00:11:03.573 "superblock": true, 00:11:03.573 "num_base_bdevs": 4, 00:11:03.573 "num_base_bdevs_discovered": 3, 00:11:03.573 "num_base_bdevs_operational": 4, 00:11:03.573 "base_bdevs_list": [ 00:11:03.573 { 00:11:03.573 "name": "BaseBdev1", 00:11:03.573 "uuid": "cc35072a-2144-4c09-8387-c301d0731d85", 00:11:03.573 "is_configured": true, 00:11:03.573 "data_offset": 2048, 00:11:03.573 "data_size": 63488 00:11:03.573 }, 00:11:03.573 { 00:11:03.573 "name": null, 00:11:03.573 "uuid": "41fb10a0-ee40-4bf1-8dd0-bb5a18b0e7ca", 00:11:03.573 "is_configured": false, 00:11:03.573 "data_offset": 0, 00:11:03.573 "data_size": 63488 00:11:03.573 }, 00:11:03.573 { 00:11:03.573 "name": "BaseBdev3", 00:11:03.573 "uuid": "5ca31d80-42e9-44e7-8d2e-e84c0b78ee9a", 00:11:03.573 "is_configured": true, 00:11:03.573 "data_offset": 2048, 00:11:03.573 "data_size": 63488 00:11:03.573 }, 00:11:03.573 { 00:11:03.573 "name": "BaseBdev4", 00:11:03.573 "uuid": 
"1f0b8ab2-6239-4251-97cf-a8f00501de1f", 00:11:03.573 "is_configured": true, 00:11:03.573 "data_offset": 2048, 00:11:03.573 "data_size": 63488 00:11:03.573 } 00:11:03.573 ] 00:11:03.573 }' 00:11:03.573 11:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.573 11:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.142 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.142 11:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.142 11:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.142 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:04.142 11:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.142 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:04.143 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:04.143 11:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.143 11:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.143 [2024-11-27 11:49:30.337701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:04.143 11:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.143 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:04.143 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.143 11:49:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.143 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.143 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.143 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.143 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.143 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.143 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.143 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.143 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.143 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.143 11:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.143 11:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.143 11:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.143 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.143 "name": "Existed_Raid", 00:11:04.143 "uuid": "0e5022da-8ee5-4b56-be3b-c49235f91711", 00:11:04.143 "strip_size_kb": 64, 00:11:04.143 "state": "configuring", 00:11:04.143 "raid_level": "concat", 00:11:04.143 "superblock": true, 00:11:04.143 "num_base_bdevs": 4, 00:11:04.143 "num_base_bdevs_discovered": 2, 00:11:04.143 "num_base_bdevs_operational": 4, 00:11:04.143 "base_bdevs_list": [ 00:11:04.143 { 00:11:04.143 "name": null, 00:11:04.143 
"uuid": "cc35072a-2144-4c09-8387-c301d0731d85", 00:11:04.143 "is_configured": false, 00:11:04.143 "data_offset": 0, 00:11:04.143 "data_size": 63488 00:11:04.143 }, 00:11:04.143 { 00:11:04.143 "name": null, 00:11:04.143 "uuid": "41fb10a0-ee40-4bf1-8dd0-bb5a18b0e7ca", 00:11:04.143 "is_configured": false, 00:11:04.143 "data_offset": 0, 00:11:04.143 "data_size": 63488 00:11:04.143 }, 00:11:04.143 { 00:11:04.143 "name": "BaseBdev3", 00:11:04.143 "uuid": "5ca31d80-42e9-44e7-8d2e-e84c0b78ee9a", 00:11:04.143 "is_configured": true, 00:11:04.143 "data_offset": 2048, 00:11:04.143 "data_size": 63488 00:11:04.143 }, 00:11:04.143 { 00:11:04.143 "name": "BaseBdev4", 00:11:04.143 "uuid": "1f0b8ab2-6239-4251-97cf-a8f00501de1f", 00:11:04.143 "is_configured": true, 00:11:04.143 "data_offset": 2048, 00:11:04.143 "data_size": 63488 00:11:04.143 } 00:11:04.143 ] 00:11:04.143 }' 00:11:04.143 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.143 11:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.713 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.713 11:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.713 11:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.713 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:04.713 11:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.713 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:04.713 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:04.713 11:49:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.713 11:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.713 [2024-11-27 11:49:30.903288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:04.713 11:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.713 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:04.713 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.713 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.713 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.713 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.713 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.713 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.713 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.713 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.713 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.713 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.713 11:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.713 11:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.713 11:49:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.713 11:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.713 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.713 "name": "Existed_Raid", 00:11:04.713 "uuid": "0e5022da-8ee5-4b56-be3b-c49235f91711", 00:11:04.713 "strip_size_kb": 64, 00:11:04.713 "state": "configuring", 00:11:04.713 "raid_level": "concat", 00:11:04.713 "superblock": true, 00:11:04.713 "num_base_bdevs": 4, 00:11:04.713 "num_base_bdevs_discovered": 3, 00:11:04.713 "num_base_bdevs_operational": 4, 00:11:04.713 "base_bdevs_list": [ 00:11:04.713 { 00:11:04.713 "name": null, 00:11:04.713 "uuid": "cc35072a-2144-4c09-8387-c301d0731d85", 00:11:04.713 "is_configured": false, 00:11:04.713 "data_offset": 0, 00:11:04.713 "data_size": 63488 00:11:04.713 }, 00:11:04.713 { 00:11:04.713 "name": "BaseBdev2", 00:11:04.713 "uuid": "41fb10a0-ee40-4bf1-8dd0-bb5a18b0e7ca", 00:11:04.713 "is_configured": true, 00:11:04.713 "data_offset": 2048, 00:11:04.713 "data_size": 63488 00:11:04.713 }, 00:11:04.714 { 00:11:04.714 "name": "BaseBdev3", 00:11:04.714 "uuid": "5ca31d80-42e9-44e7-8d2e-e84c0b78ee9a", 00:11:04.714 "is_configured": true, 00:11:04.714 "data_offset": 2048, 00:11:04.714 "data_size": 63488 00:11:04.714 }, 00:11:04.714 { 00:11:04.714 "name": "BaseBdev4", 00:11:04.714 "uuid": "1f0b8ab2-6239-4251-97cf-a8f00501de1f", 00:11:04.714 "is_configured": true, 00:11:04.714 "data_offset": 2048, 00:11:04.714 "data_size": 63488 00:11:04.714 } 00:11:04.714 ] 00:11:04.714 }' 00:11:04.714 11:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.714 11:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.973 11:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:04.973 11:49:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.973 11:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.973 11:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.232 11:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.232 11:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:05.232 11:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.232 11:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cc35072a-2144-4c09-8387-c301d0731d85 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.233 [2024-11-27 11:49:31.484582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:05.233 [2024-11-27 11:49:31.484868] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:05.233 [2024-11-27 11:49:31.484883] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:05.233 [2024-11-27 11:49:31.485142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:05.233 NewBaseBdev 00:11:05.233 [2024-11-27 11:49:31.485301] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:05.233 [2024-11-27 11:49:31.485319] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:05.233 [2024-11-27 11:49:31.485471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.233 11:49:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.233 [ 00:11:05.233 { 00:11:05.233 "name": "NewBaseBdev", 00:11:05.233 "aliases": [ 00:11:05.233 "cc35072a-2144-4c09-8387-c301d0731d85" 00:11:05.233 ], 00:11:05.233 "product_name": "Malloc disk", 00:11:05.233 "block_size": 512, 00:11:05.233 "num_blocks": 65536, 00:11:05.233 "uuid": "cc35072a-2144-4c09-8387-c301d0731d85", 00:11:05.233 "assigned_rate_limits": { 00:11:05.233 "rw_ios_per_sec": 0, 00:11:05.233 "rw_mbytes_per_sec": 0, 00:11:05.233 "r_mbytes_per_sec": 0, 00:11:05.233 "w_mbytes_per_sec": 0 00:11:05.233 }, 00:11:05.233 "claimed": true, 00:11:05.233 "claim_type": "exclusive_write", 00:11:05.233 "zoned": false, 00:11:05.233 "supported_io_types": { 00:11:05.233 "read": true, 00:11:05.233 "write": true, 00:11:05.233 "unmap": true, 00:11:05.233 "flush": true, 00:11:05.233 "reset": true, 00:11:05.233 "nvme_admin": false, 00:11:05.233 "nvme_io": false, 00:11:05.233 "nvme_io_md": false, 00:11:05.233 "write_zeroes": true, 00:11:05.233 "zcopy": true, 00:11:05.233 "get_zone_info": false, 00:11:05.233 "zone_management": false, 00:11:05.233 "zone_append": false, 00:11:05.233 "compare": false, 00:11:05.233 "compare_and_write": false, 00:11:05.233 "abort": true, 00:11:05.233 "seek_hole": false, 00:11:05.233 "seek_data": false, 00:11:05.233 "copy": true, 00:11:05.233 "nvme_iov_md": false 00:11:05.233 }, 00:11:05.233 "memory_domains": [ 00:11:05.233 { 00:11:05.233 "dma_device_id": "system", 00:11:05.233 "dma_device_type": 1 00:11:05.233 }, 00:11:05.233 { 00:11:05.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.233 "dma_device_type": 2 00:11:05.233 } 00:11:05.233 ], 00:11:05.233 "driver_specific": {} 00:11:05.233 } 00:11:05.233 ] 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:05.233 11:49:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.233 11:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.233 "name": "Existed_Raid", 00:11:05.233 "uuid": "0e5022da-8ee5-4b56-be3b-c49235f91711", 00:11:05.233 "strip_size_kb": 64, 00:11:05.233 
"state": "online", 00:11:05.233 "raid_level": "concat", 00:11:05.233 "superblock": true, 00:11:05.233 "num_base_bdevs": 4, 00:11:05.233 "num_base_bdevs_discovered": 4, 00:11:05.233 "num_base_bdevs_operational": 4, 00:11:05.233 "base_bdevs_list": [ 00:11:05.233 { 00:11:05.233 "name": "NewBaseBdev", 00:11:05.233 "uuid": "cc35072a-2144-4c09-8387-c301d0731d85", 00:11:05.233 "is_configured": true, 00:11:05.233 "data_offset": 2048, 00:11:05.233 "data_size": 63488 00:11:05.233 }, 00:11:05.233 { 00:11:05.234 "name": "BaseBdev2", 00:11:05.234 "uuid": "41fb10a0-ee40-4bf1-8dd0-bb5a18b0e7ca", 00:11:05.234 "is_configured": true, 00:11:05.234 "data_offset": 2048, 00:11:05.234 "data_size": 63488 00:11:05.234 }, 00:11:05.234 { 00:11:05.234 "name": "BaseBdev3", 00:11:05.234 "uuid": "5ca31d80-42e9-44e7-8d2e-e84c0b78ee9a", 00:11:05.234 "is_configured": true, 00:11:05.234 "data_offset": 2048, 00:11:05.234 "data_size": 63488 00:11:05.234 }, 00:11:05.234 { 00:11:05.234 "name": "BaseBdev4", 00:11:05.234 "uuid": "1f0b8ab2-6239-4251-97cf-a8f00501de1f", 00:11:05.234 "is_configured": true, 00:11:05.234 "data_offset": 2048, 00:11:05.234 "data_size": 63488 00:11:05.234 } 00:11:05.234 ] 00:11:05.234 }' 00:11:05.234 11:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.234 11:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.801 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:05.801 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:05.801 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:05.801 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:05.801 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:05.801 
11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:05.801 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:05.801 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:05.801 11:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.801 11:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.801 [2024-11-27 11:49:32.024178] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.801 11:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.801 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:05.801 "name": "Existed_Raid", 00:11:05.801 "aliases": [ 00:11:05.801 "0e5022da-8ee5-4b56-be3b-c49235f91711" 00:11:05.801 ], 00:11:05.802 "product_name": "Raid Volume", 00:11:05.802 "block_size": 512, 00:11:05.802 "num_blocks": 253952, 00:11:05.802 "uuid": "0e5022da-8ee5-4b56-be3b-c49235f91711", 00:11:05.802 "assigned_rate_limits": { 00:11:05.802 "rw_ios_per_sec": 0, 00:11:05.802 "rw_mbytes_per_sec": 0, 00:11:05.802 "r_mbytes_per_sec": 0, 00:11:05.802 "w_mbytes_per_sec": 0 00:11:05.802 }, 00:11:05.802 "claimed": false, 00:11:05.802 "zoned": false, 00:11:05.802 "supported_io_types": { 00:11:05.802 "read": true, 00:11:05.802 "write": true, 00:11:05.802 "unmap": true, 00:11:05.802 "flush": true, 00:11:05.802 "reset": true, 00:11:05.802 "nvme_admin": false, 00:11:05.802 "nvme_io": false, 00:11:05.802 "nvme_io_md": false, 00:11:05.802 "write_zeroes": true, 00:11:05.802 "zcopy": false, 00:11:05.802 "get_zone_info": false, 00:11:05.802 "zone_management": false, 00:11:05.802 "zone_append": false, 00:11:05.802 "compare": false, 00:11:05.802 "compare_and_write": false, 00:11:05.802 "abort": 
false, 00:11:05.802 "seek_hole": false, 00:11:05.802 "seek_data": false, 00:11:05.802 "copy": false, 00:11:05.802 "nvme_iov_md": false 00:11:05.802 }, 00:11:05.802 "memory_domains": [ 00:11:05.802 { 00:11:05.802 "dma_device_id": "system", 00:11:05.802 "dma_device_type": 1 00:11:05.802 }, 00:11:05.802 { 00:11:05.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.802 "dma_device_type": 2 00:11:05.802 }, 00:11:05.802 { 00:11:05.802 "dma_device_id": "system", 00:11:05.802 "dma_device_type": 1 00:11:05.802 }, 00:11:05.802 { 00:11:05.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.802 "dma_device_type": 2 00:11:05.802 }, 00:11:05.802 { 00:11:05.802 "dma_device_id": "system", 00:11:05.802 "dma_device_type": 1 00:11:05.802 }, 00:11:05.802 { 00:11:05.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.802 "dma_device_type": 2 00:11:05.802 }, 00:11:05.802 { 00:11:05.802 "dma_device_id": "system", 00:11:05.802 "dma_device_type": 1 00:11:05.802 }, 00:11:05.802 { 00:11:05.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.802 "dma_device_type": 2 00:11:05.802 } 00:11:05.802 ], 00:11:05.802 "driver_specific": { 00:11:05.802 "raid": { 00:11:05.802 "uuid": "0e5022da-8ee5-4b56-be3b-c49235f91711", 00:11:05.802 "strip_size_kb": 64, 00:11:05.802 "state": "online", 00:11:05.802 "raid_level": "concat", 00:11:05.802 "superblock": true, 00:11:05.802 "num_base_bdevs": 4, 00:11:05.802 "num_base_bdevs_discovered": 4, 00:11:05.802 "num_base_bdevs_operational": 4, 00:11:05.802 "base_bdevs_list": [ 00:11:05.802 { 00:11:05.802 "name": "NewBaseBdev", 00:11:05.802 "uuid": "cc35072a-2144-4c09-8387-c301d0731d85", 00:11:05.802 "is_configured": true, 00:11:05.802 "data_offset": 2048, 00:11:05.802 "data_size": 63488 00:11:05.802 }, 00:11:05.802 { 00:11:05.802 "name": "BaseBdev2", 00:11:05.802 "uuid": "41fb10a0-ee40-4bf1-8dd0-bb5a18b0e7ca", 00:11:05.802 "is_configured": true, 00:11:05.802 "data_offset": 2048, 00:11:05.802 "data_size": 63488 00:11:05.802 }, 00:11:05.802 { 00:11:05.802 
"name": "BaseBdev3", 00:11:05.802 "uuid": "5ca31d80-42e9-44e7-8d2e-e84c0b78ee9a", 00:11:05.802 "is_configured": true, 00:11:05.802 "data_offset": 2048, 00:11:05.802 "data_size": 63488 00:11:05.802 }, 00:11:05.802 { 00:11:05.802 "name": "BaseBdev4", 00:11:05.802 "uuid": "1f0b8ab2-6239-4251-97cf-a8f00501de1f", 00:11:05.802 "is_configured": true, 00:11:05.802 "data_offset": 2048, 00:11:05.802 "data_size": 63488 00:11:05.802 } 00:11:05.802 ] 00:11:05.802 } 00:11:05.802 } 00:11:05.802 }' 00:11:05.802 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:05.802 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:05.802 BaseBdev2 00:11:05.802 BaseBdev3 00:11:05.802 BaseBdev4' 00:11:05.802 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.802 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:05.802 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.802 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:05.802 11:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.802 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.802 11:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.802 11:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.064 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.064 11:49:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.064 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.064 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:06.064 11:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.064 11:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.064 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.065 [2024-11-27 11:49:32.343222] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:06.065 [2024-11-27 11:49:32.343257] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.065 [2024-11-27 11:49:32.343356] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.065 [2024-11-27 11:49:32.343443] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.065 [2024-11-27 11:49:32.343455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71955 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71955 ']' 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71955 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71955 00:11:06.065 killing process with pid 71955 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71955' 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71955 00:11:06.065 [2024-11-27 11:49:32.384490] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:06.065 11:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71955 00:11:06.635 [2024-11-27 11:49:32.783533] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:07.574 11:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:07.574 00:11:07.574 real 0m11.854s 00:11:07.574 user 0m18.927s 00:11:07.574 sys 0m2.044s 00:11:07.574 11:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.574 11:49:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.574 ************************************ 00:11:07.574 END TEST raid_state_function_test_sb 00:11:07.574 ************************************ 00:11:07.833 11:49:33 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:07.834 11:49:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:07.834 11:49:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.834 11:49:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:07.834 ************************************ 00:11:07.834 START TEST raid_superblock_test 00:11:07.834 ************************************ 00:11:07.834 11:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:11:07.834 11:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:07.834 11:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:07.834 11:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:07.834 11:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:07.834 11:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:07.834 11:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:07.834 11:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:07.834 11:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:07.834 11:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:07.834 11:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:07.834 11:49:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:07.834 11:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:07.834 11:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:07.834 11:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:07.834 11:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:07.834 11:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:07.834 11:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72629 00:11:07.834 11:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:07.834 11:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72629 00:11:07.834 11:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72629 ']' 00:11:07.834 11:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.834 11:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.834 11:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.834 11:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.834 11:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.834 [2024-11-27 11:49:34.087187] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:11:07.834 [2024-11-27 11:49:34.087366] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72629 ] 00:11:08.093 [2024-11-27 11:49:34.262951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.093 [2024-11-27 11:49:34.381588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.352 [2024-11-27 11:49:34.587174] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.352 [2024-11-27 11:49:34.587337] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.611 11:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.611 11:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:08.611 11:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:08.611 11:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:08.611 11:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:08.611 11:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:08.611 11:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:08.611 11:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:08.611 11:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:08.611 11:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:08.611 11:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:08.611 
11:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.611 11:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.611 malloc1 00:11:08.611 11:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.611 11:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:08.611 11:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.611 11:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.611 [2024-11-27 11:49:34.978841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:08.611 [2024-11-27 11:49:34.978984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.611 [2024-11-27 11:49:34.979035] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:08.611 [2024-11-27 11:49:34.979078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.611 [2024-11-27 11:49:34.981583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.611 [2024-11-27 11:49:34.981667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:08.611 pt1 00:11:08.611 11:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.611 11:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:08.611 11:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:08.611 11:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:08.611 11:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:08.611 11:49:34 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:08.611 11:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:08.611 11:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:08.611 11:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:08.611 11:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:08.611 11:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.611 11:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.873 malloc2 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.873 [2024-11-27 11:49:35.040176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:08.873 [2024-11-27 11:49:35.040242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.873 [2024-11-27 11:49:35.040272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:08.873 [2024-11-27 11:49:35.040282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.873 [2024-11-27 11:49:35.042436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.873 [2024-11-27 11:49:35.042471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:08.873 
pt2 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.873 malloc3 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.873 [2024-11-27 11:49:35.109468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:08.873 [2024-11-27 11:49:35.109605] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.873 [2024-11-27 11:49:35.109650] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:08.873 [2024-11-27 11:49:35.109679] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.873 [2024-11-27 11:49:35.111889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.873 [2024-11-27 11:49:35.111962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:08.873 pt3 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.873 malloc4 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.873 [2024-11-27 11:49:35.170425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:08.873 [2024-11-27 11:49:35.170525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.873 [2024-11-27 11:49:35.170568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:08.873 [2024-11-27 11:49:35.170596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.873 [2024-11-27 11:49:35.172876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.873 [2024-11-27 11:49:35.172939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:08.873 pt4 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.873 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.873 [2024-11-27 11:49:35.182452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:08.873 [2024-11-27 
11:49:35.184418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:08.873 [2024-11-27 11:49:35.184553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:08.873 [2024-11-27 11:49:35.184626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:08.873 [2024-11-27 11:49:35.184862] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:08.873 [2024-11-27 11:49:35.184928] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:08.873 [2024-11-27 11:49:35.185290] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:08.873 [2024-11-27 11:49:35.185527] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:08.873 [2024-11-27 11:49:35.185571] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:08.873 [2024-11-27 11:49:35.185824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.874 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.874 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:08.874 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.874 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.874 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.874 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.874 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.874 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:08.874 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.874 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.874 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.874 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.874 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.874 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.874 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.874 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.874 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.874 "name": "raid_bdev1", 00:11:08.874 "uuid": "64f831a6-3bc9-421d-9f7c-310c875193f3", 00:11:08.874 "strip_size_kb": 64, 00:11:08.874 "state": "online", 00:11:08.874 "raid_level": "concat", 00:11:08.874 "superblock": true, 00:11:08.874 "num_base_bdevs": 4, 00:11:08.874 "num_base_bdevs_discovered": 4, 00:11:08.874 "num_base_bdevs_operational": 4, 00:11:08.874 "base_bdevs_list": [ 00:11:08.874 { 00:11:08.874 "name": "pt1", 00:11:08.874 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:08.874 "is_configured": true, 00:11:08.874 "data_offset": 2048, 00:11:08.874 "data_size": 63488 00:11:08.874 }, 00:11:08.874 { 00:11:08.874 "name": "pt2", 00:11:08.874 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:08.874 "is_configured": true, 00:11:08.874 "data_offset": 2048, 00:11:08.874 "data_size": 63488 00:11:08.874 }, 00:11:08.874 { 00:11:08.874 "name": "pt3", 00:11:08.874 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:08.874 "is_configured": true, 00:11:08.874 "data_offset": 2048, 00:11:08.874 
"data_size": 63488 00:11:08.874 }, 00:11:08.874 { 00:11:08.874 "name": "pt4", 00:11:08.874 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:08.874 "is_configured": true, 00:11:08.874 "data_offset": 2048, 00:11:08.874 "data_size": 63488 00:11:08.874 } 00:11:08.874 ] 00:11:08.874 }' 00:11:08.874 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.874 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.486 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:09.486 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:09.486 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:09.486 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:09.486 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:09.486 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:09.486 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:09.486 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:09.486 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.486 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.486 [2024-11-27 11:49:35.637992] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.486 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.486 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:09.486 "name": "raid_bdev1", 00:11:09.486 "aliases": [ 00:11:09.486 "64f831a6-3bc9-421d-9f7c-310c875193f3" 
00:11:09.486 ], 00:11:09.486 "product_name": "Raid Volume", 00:11:09.486 "block_size": 512, 00:11:09.486 "num_blocks": 253952, 00:11:09.486 "uuid": "64f831a6-3bc9-421d-9f7c-310c875193f3", 00:11:09.486 "assigned_rate_limits": { 00:11:09.486 "rw_ios_per_sec": 0, 00:11:09.486 "rw_mbytes_per_sec": 0, 00:11:09.486 "r_mbytes_per_sec": 0, 00:11:09.486 "w_mbytes_per_sec": 0 00:11:09.486 }, 00:11:09.486 "claimed": false, 00:11:09.486 "zoned": false, 00:11:09.486 "supported_io_types": { 00:11:09.486 "read": true, 00:11:09.486 "write": true, 00:11:09.486 "unmap": true, 00:11:09.486 "flush": true, 00:11:09.486 "reset": true, 00:11:09.486 "nvme_admin": false, 00:11:09.486 "nvme_io": false, 00:11:09.486 "nvme_io_md": false, 00:11:09.486 "write_zeroes": true, 00:11:09.486 "zcopy": false, 00:11:09.486 "get_zone_info": false, 00:11:09.486 "zone_management": false, 00:11:09.486 "zone_append": false, 00:11:09.486 "compare": false, 00:11:09.486 "compare_and_write": false, 00:11:09.486 "abort": false, 00:11:09.486 "seek_hole": false, 00:11:09.486 "seek_data": false, 00:11:09.486 "copy": false, 00:11:09.486 "nvme_iov_md": false 00:11:09.486 }, 00:11:09.486 "memory_domains": [ 00:11:09.486 { 00:11:09.486 "dma_device_id": "system", 00:11:09.486 "dma_device_type": 1 00:11:09.486 }, 00:11:09.486 { 00:11:09.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.486 "dma_device_type": 2 00:11:09.486 }, 00:11:09.486 { 00:11:09.486 "dma_device_id": "system", 00:11:09.486 "dma_device_type": 1 00:11:09.486 }, 00:11:09.486 { 00:11:09.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.486 "dma_device_type": 2 00:11:09.486 }, 00:11:09.486 { 00:11:09.486 "dma_device_id": "system", 00:11:09.486 "dma_device_type": 1 00:11:09.486 }, 00:11:09.486 { 00:11:09.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.486 "dma_device_type": 2 00:11:09.486 }, 00:11:09.486 { 00:11:09.486 "dma_device_id": "system", 00:11:09.486 "dma_device_type": 1 00:11:09.486 }, 00:11:09.486 { 00:11:09.486 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:09.486 "dma_device_type": 2 00:11:09.486 } 00:11:09.486 ], 00:11:09.486 "driver_specific": { 00:11:09.486 "raid": { 00:11:09.486 "uuid": "64f831a6-3bc9-421d-9f7c-310c875193f3", 00:11:09.486 "strip_size_kb": 64, 00:11:09.486 "state": "online", 00:11:09.486 "raid_level": "concat", 00:11:09.486 "superblock": true, 00:11:09.486 "num_base_bdevs": 4, 00:11:09.486 "num_base_bdevs_discovered": 4, 00:11:09.486 "num_base_bdevs_operational": 4, 00:11:09.486 "base_bdevs_list": [ 00:11:09.486 { 00:11:09.486 "name": "pt1", 00:11:09.486 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:09.486 "is_configured": true, 00:11:09.486 "data_offset": 2048, 00:11:09.486 "data_size": 63488 00:11:09.486 }, 00:11:09.486 { 00:11:09.486 "name": "pt2", 00:11:09.486 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.486 "is_configured": true, 00:11:09.486 "data_offset": 2048, 00:11:09.486 "data_size": 63488 00:11:09.486 }, 00:11:09.486 { 00:11:09.486 "name": "pt3", 00:11:09.486 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:09.486 "is_configured": true, 00:11:09.486 "data_offset": 2048, 00:11:09.486 "data_size": 63488 00:11:09.486 }, 00:11:09.486 { 00:11:09.486 "name": "pt4", 00:11:09.486 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:09.486 "is_configured": true, 00:11:09.486 "data_offset": 2048, 00:11:09.486 "data_size": 63488 00:11:09.486 } 00:11:09.486 ] 00:11:09.486 } 00:11:09.486 } 00:11:09.486 }' 00:11:09.486 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:09.486 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:09.486 pt2 00:11:09.486 pt3 00:11:09.486 pt4' 00:11:09.486 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.486 11:49:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:09.486 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.486 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.486 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:09.486 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.486 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.486 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.486 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.486 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.486 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.486 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.486 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:09.486 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.487 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.487 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.487 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.487 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.487 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.487 11:49:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:09.487 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.487 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.487 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.747 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.747 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.747 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.747 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.747 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:09.747 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.747 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.747 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.747 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.747 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.747 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.747 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:09.747 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.747 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:09.747 11:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:09.747 [2024-11-27 11:49:35.969414] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.747 11:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.747 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=64f831a6-3bc9-421d-9f7c-310c875193f3 00:11:09.747 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 64f831a6-3bc9-421d-9f7c-310c875193f3 ']' 00:11:09.747 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:09.747 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.747 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.747 [2024-11-27 11:49:36.009011] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:09.747 [2024-11-27 11:49:36.009041] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:09.747 [2024-11-27 11:49:36.009150] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:09.747 [2024-11-27 11:49:36.009221] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:09.747 [2024-11-27 11:49:36.009236] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:09.747 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.747 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:09.747 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.747 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:09.747 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.747 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.747 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:09.747 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:09.747 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:09.747 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:09.747 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.747 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.747 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.747 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:09.747 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:09.747 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.747 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.747 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.747 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:09.747 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:09.747 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.748 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.748 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:09.748 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:09.748 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:09.748 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.748 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.748 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.748 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:09.748 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.748 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:09.748 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.007 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.007 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:10.007 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:10.007 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:10.007 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.008 [2024-11-27 11:49:36.172794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:10.008 [2024-11-27 11:49:36.174769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:10.008 [2024-11-27 11:49:36.174883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:10.008 [2024-11-27 11:49:36.174939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:10.008 [2024-11-27 11:49:36.175048] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:10.008 [2024-11-27 11:49:36.175155] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:10.008 [2024-11-27 11:49:36.175219] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:10.008 [2024-11-27 11:49:36.175297] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:10.008 [2024-11-27 11:49:36.175343] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:10.008 [2024-11-27 11:49:36.175357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:11:10.008 request: 00:11:10.008 { 00:11:10.008 "name": "raid_bdev1", 00:11:10.008 "raid_level": "concat", 00:11:10.008 "base_bdevs": [ 00:11:10.008 "malloc1", 00:11:10.008 "malloc2", 00:11:10.008 "malloc3", 00:11:10.008 "malloc4" 00:11:10.008 ], 00:11:10.008 "strip_size_kb": 64, 00:11:10.008 "superblock": false, 00:11:10.008 "method": "bdev_raid_create", 00:11:10.008 "req_id": 1 00:11:10.008 } 00:11:10.008 Got JSON-RPC error response 00:11:10.008 response: 00:11:10.008 { 00:11:10.008 "code": -17, 00:11:10.008 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:10.008 } 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.008 [2024-11-27 11:49:36.232616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:10.008 [2024-11-27 11:49:36.232729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.008 [2024-11-27 11:49:36.232771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:10.008 [2024-11-27 11:49:36.232807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.008 [2024-11-27 11:49:36.235090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.008 [2024-11-27 11:49:36.235178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:10.008 [2024-11-27 11:49:36.235294] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:10.008 [2024-11-27 11:49:36.235382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:10.008 pt1 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.008 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.008 "name": "raid_bdev1", 00:11:10.008 "uuid": "64f831a6-3bc9-421d-9f7c-310c875193f3", 00:11:10.008 "strip_size_kb": 64, 00:11:10.008 "state": "configuring", 00:11:10.008 "raid_level": "concat", 00:11:10.008 "superblock": true, 00:11:10.008 "num_base_bdevs": 4, 00:11:10.008 "num_base_bdevs_discovered": 1, 00:11:10.008 "num_base_bdevs_operational": 4, 00:11:10.008 "base_bdevs_list": [ 00:11:10.008 { 00:11:10.008 "name": "pt1", 00:11:10.008 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:10.008 "is_configured": true, 00:11:10.008 "data_offset": 2048, 00:11:10.008 "data_size": 63488 00:11:10.008 }, 00:11:10.008 { 00:11:10.008 "name": null, 00:11:10.008 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:10.008 "is_configured": false, 00:11:10.008 "data_offset": 2048, 00:11:10.008 "data_size": 63488 00:11:10.008 }, 00:11:10.008 { 00:11:10.008 "name": null, 00:11:10.008 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:10.008 "is_configured": false, 00:11:10.008 "data_offset": 2048, 00:11:10.008 "data_size": 63488 00:11:10.008 }, 00:11:10.008 { 00:11:10.008 "name": null, 00:11:10.008 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:10.008 "is_configured": false, 00:11:10.008 "data_offset": 2048, 00:11:10.008 "data_size": 63488 00:11:10.008 } 00:11:10.008 ] 00:11:10.009 }' 00:11:10.009 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.009 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.269 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:10.269 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:10.269 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.269 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.269 [2024-11-27 11:49:36.628010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:10.269 [2024-11-27 11:49:36.628092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.269 [2024-11-27 11:49:36.628114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:10.269 [2024-11-27 11:49:36.628125] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.269 [2024-11-27 11:49:36.628580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.269 [2024-11-27 11:49:36.628600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:10.269 [2024-11-27 11:49:36.628685] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:10.269 [2024-11-27 11:49:36.628710] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:10.269 pt2 00:11:10.269 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.269 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:10.269 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.269 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.269 [2024-11-27 11:49:36.636034] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:10.269 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.269 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:10.269 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.269 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.269 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.269 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.269 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.269 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.269 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.269 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.269 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.269 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.269 11:49:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.269 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.269 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.528 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.528 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.528 "name": "raid_bdev1", 00:11:10.528 "uuid": "64f831a6-3bc9-421d-9f7c-310c875193f3", 00:11:10.528 "strip_size_kb": 64, 00:11:10.528 "state": "configuring", 00:11:10.528 "raid_level": "concat", 00:11:10.528 "superblock": true, 00:11:10.528 "num_base_bdevs": 4, 00:11:10.528 "num_base_bdevs_discovered": 1, 00:11:10.528 "num_base_bdevs_operational": 4, 00:11:10.528 "base_bdevs_list": [ 00:11:10.528 { 00:11:10.528 "name": "pt1", 00:11:10.528 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:10.528 "is_configured": true, 00:11:10.528 "data_offset": 2048, 00:11:10.528 "data_size": 63488 00:11:10.528 }, 00:11:10.528 { 00:11:10.528 "name": null, 00:11:10.528 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:10.528 "is_configured": false, 00:11:10.528 "data_offset": 0, 00:11:10.528 "data_size": 63488 00:11:10.528 }, 00:11:10.528 { 00:11:10.528 "name": null, 00:11:10.528 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:10.528 "is_configured": false, 00:11:10.528 "data_offset": 2048, 00:11:10.528 "data_size": 63488 00:11:10.528 }, 00:11:10.528 { 00:11:10.528 "name": null, 00:11:10.528 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:10.528 "is_configured": false, 00:11:10.528 "data_offset": 2048, 00:11:10.528 "data_size": 63488 00:11:10.528 } 00:11:10.528 ] 00:11:10.528 }' 00:11:10.528 11:49:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.528 11:49:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:10.789 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:10.789 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:10.789 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:10.789 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.789 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.789 [2024-11-27 11:49:37.107282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:10.789 [2024-11-27 11:49:37.107428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.789 [2024-11-27 11:49:37.107476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:10.789 [2024-11-27 11:49:37.107610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.789 [2024-11-27 11:49:37.108167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.789 [2024-11-27 11:49:37.108233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:10.789 [2024-11-27 11:49:37.108363] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:10.789 [2024-11-27 11:49:37.108419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:10.789 pt2 00:11:10.789 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.789 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:10.789 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:10.789 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:11:10.789 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.789 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.789 [2024-11-27 11:49:37.119220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:10.789 [2024-11-27 11:49:37.119276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.789 [2024-11-27 11:49:37.119297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:10.789 [2024-11-27 11:49:37.119306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.789 [2024-11-27 11:49:37.119777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.789 [2024-11-27 11:49:37.119796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:10.789 [2024-11-27 11:49:37.119899] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:10.789 [2024-11-27 11:49:37.119943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:10.789 pt3 00:11:10.789 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.789 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:10.789 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:10.789 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:10.789 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.789 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.789 [2024-11-27 11:49:37.131164] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:11:10.789 [2024-11-27 11:49:37.131209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.789 [2024-11-27 11:49:37.131226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:10.789 [2024-11-27 11:49:37.131234] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.789 [2024-11-27 11:49:37.131686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.789 [2024-11-27 11:49:37.131704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:10.789 [2024-11-27 11:49:37.131783] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:10.789 [2024-11-27 11:49:37.131807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:10.789 [2024-11-27 11:49:37.131966] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:10.789 [2024-11-27 11:49:37.131976] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:10.789 [2024-11-27 11:49:37.132245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:10.789 [2024-11-27 11:49:37.132423] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:10.789 [2024-11-27 11:49:37.132438] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:10.789 [2024-11-27 11:49:37.132586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.789 pt4 00:11:10.789 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.789 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:10.790 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:10.790 
11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:10.790 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.790 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.790 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.790 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.790 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.790 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.790 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.790 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.790 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.790 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.790 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.790 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.790 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.790 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.048 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.048 "name": "raid_bdev1", 00:11:11.048 "uuid": "64f831a6-3bc9-421d-9f7c-310c875193f3", 00:11:11.048 "strip_size_kb": 64, 00:11:11.048 "state": "online", 00:11:11.048 "raid_level": "concat", 00:11:11.048 "superblock": true, 00:11:11.048 
"num_base_bdevs": 4, 00:11:11.048 "num_base_bdevs_discovered": 4, 00:11:11.048 "num_base_bdevs_operational": 4, 00:11:11.048 "base_bdevs_list": [ 00:11:11.048 { 00:11:11.048 "name": "pt1", 00:11:11.048 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:11.048 "is_configured": true, 00:11:11.048 "data_offset": 2048, 00:11:11.048 "data_size": 63488 00:11:11.048 }, 00:11:11.048 { 00:11:11.048 "name": "pt2", 00:11:11.048 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:11.048 "is_configured": true, 00:11:11.048 "data_offset": 2048, 00:11:11.048 "data_size": 63488 00:11:11.048 }, 00:11:11.048 { 00:11:11.048 "name": "pt3", 00:11:11.048 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:11.048 "is_configured": true, 00:11:11.048 "data_offset": 2048, 00:11:11.048 "data_size": 63488 00:11:11.048 }, 00:11:11.048 { 00:11:11.048 "name": "pt4", 00:11:11.048 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:11.048 "is_configured": true, 00:11:11.048 "data_offset": 2048, 00:11:11.048 "data_size": 63488 00:11:11.048 } 00:11:11.048 ] 00:11:11.048 }' 00:11:11.048 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.048 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.308 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:11.308 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:11.308 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:11.308 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:11.309 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:11.309 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:11.309 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:11.309 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:11.309 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.309 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.309 [2024-11-27 11:49:37.610723] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:11.309 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.309 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:11.309 "name": "raid_bdev1", 00:11:11.309 "aliases": [ 00:11:11.309 "64f831a6-3bc9-421d-9f7c-310c875193f3" 00:11:11.309 ], 00:11:11.309 "product_name": "Raid Volume", 00:11:11.309 "block_size": 512, 00:11:11.309 "num_blocks": 253952, 00:11:11.309 "uuid": "64f831a6-3bc9-421d-9f7c-310c875193f3", 00:11:11.309 "assigned_rate_limits": { 00:11:11.309 "rw_ios_per_sec": 0, 00:11:11.309 "rw_mbytes_per_sec": 0, 00:11:11.309 "r_mbytes_per_sec": 0, 00:11:11.309 "w_mbytes_per_sec": 0 00:11:11.309 }, 00:11:11.309 "claimed": false, 00:11:11.309 "zoned": false, 00:11:11.309 "supported_io_types": { 00:11:11.309 "read": true, 00:11:11.309 "write": true, 00:11:11.309 "unmap": true, 00:11:11.309 "flush": true, 00:11:11.309 "reset": true, 00:11:11.309 "nvme_admin": false, 00:11:11.309 "nvme_io": false, 00:11:11.309 "nvme_io_md": false, 00:11:11.309 "write_zeroes": true, 00:11:11.309 "zcopy": false, 00:11:11.309 "get_zone_info": false, 00:11:11.309 "zone_management": false, 00:11:11.309 "zone_append": false, 00:11:11.309 "compare": false, 00:11:11.309 "compare_and_write": false, 00:11:11.309 "abort": false, 00:11:11.309 "seek_hole": false, 00:11:11.309 "seek_data": false, 00:11:11.309 "copy": false, 00:11:11.309 "nvme_iov_md": false 00:11:11.309 }, 00:11:11.309 "memory_domains": [ 00:11:11.309 { 00:11:11.309 "dma_device_id": "system", 
00:11:11.309 "dma_device_type": 1 00:11:11.309 }, 00:11:11.309 { 00:11:11.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.309 "dma_device_type": 2 00:11:11.309 }, 00:11:11.309 { 00:11:11.309 "dma_device_id": "system", 00:11:11.309 "dma_device_type": 1 00:11:11.309 }, 00:11:11.309 { 00:11:11.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.309 "dma_device_type": 2 00:11:11.309 }, 00:11:11.309 { 00:11:11.309 "dma_device_id": "system", 00:11:11.309 "dma_device_type": 1 00:11:11.309 }, 00:11:11.309 { 00:11:11.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.309 "dma_device_type": 2 00:11:11.309 }, 00:11:11.309 { 00:11:11.309 "dma_device_id": "system", 00:11:11.309 "dma_device_type": 1 00:11:11.309 }, 00:11:11.309 { 00:11:11.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.309 "dma_device_type": 2 00:11:11.309 } 00:11:11.309 ], 00:11:11.309 "driver_specific": { 00:11:11.309 "raid": { 00:11:11.309 "uuid": "64f831a6-3bc9-421d-9f7c-310c875193f3", 00:11:11.309 "strip_size_kb": 64, 00:11:11.309 "state": "online", 00:11:11.309 "raid_level": "concat", 00:11:11.309 "superblock": true, 00:11:11.309 "num_base_bdevs": 4, 00:11:11.309 "num_base_bdevs_discovered": 4, 00:11:11.309 "num_base_bdevs_operational": 4, 00:11:11.309 "base_bdevs_list": [ 00:11:11.309 { 00:11:11.309 "name": "pt1", 00:11:11.309 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:11.309 "is_configured": true, 00:11:11.309 "data_offset": 2048, 00:11:11.309 "data_size": 63488 00:11:11.309 }, 00:11:11.309 { 00:11:11.309 "name": "pt2", 00:11:11.309 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:11.309 "is_configured": true, 00:11:11.309 "data_offset": 2048, 00:11:11.309 "data_size": 63488 00:11:11.309 }, 00:11:11.309 { 00:11:11.309 "name": "pt3", 00:11:11.309 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:11.309 "is_configured": true, 00:11:11.309 "data_offset": 2048, 00:11:11.309 "data_size": 63488 00:11:11.309 }, 00:11:11.309 { 00:11:11.309 "name": "pt4", 00:11:11.309 
"uuid": "00000000-0000-0000-0000-000000000004", 00:11:11.309 "is_configured": true, 00:11:11.309 "data_offset": 2048, 00:11:11.309 "data_size": 63488 00:11:11.309 } 00:11:11.309 ] 00:11:11.309 } 00:11:11.309 } 00:11:11.309 }' 00:11:11.309 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:11.569 pt2 00:11:11.569 pt3 00:11:11.569 pt4' 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.569 
11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.569 11:49:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.569 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.569 [2024-11-27 11:49:37.942166] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:11.828 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.828 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 64f831a6-3bc9-421d-9f7c-310c875193f3 '!=' 64f831a6-3bc9-421d-9f7c-310c875193f3 ']' 00:11:11.828 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:11.828 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:11.828 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:11.828 11:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72629 00:11:11.828 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72629 ']' 00:11:11.828 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72629 00:11:11.828 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:11.828 11:49:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.829 11:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72629 00:11:11.829 11:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:11.829 killing process with pid 72629 00:11:11.829 11:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:11.829 11:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72629' 00:11:11.829 11:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72629 00:11:11.829 [2024-11-27 11:49:38.027845] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:11.829 [2024-11-27 11:49:38.027960] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:11.829 [2024-11-27 11:49:38.028041] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:11.829 11:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72629 00:11:11.829 [2024-11-27 11:49:38.028052] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:12.088 [2024-11-27 11:49:38.430827] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:13.470 11:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:13.470 ************************************ 00:11:13.470 END TEST raid_superblock_test 00:11:13.470 ************************************ 00:11:13.470 00:11:13.470 real 0m5.569s 00:11:13.470 user 0m7.987s 00:11:13.470 sys 0m0.936s 00:11:13.470 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.470 11:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.470 
11:49:39 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:13.470 11:49:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:13.470 11:49:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.470 11:49:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:13.470 ************************************ 00:11:13.470 START TEST raid_read_error_test 00:11:13.470 ************************************ 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LE5U04soDt 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72892 00:11:13.470 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72892 00:11:13.471 11:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T 
raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:13.471 11:49:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72892 ']' 00:11:13.471 11:49:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.471 11:49:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.471 11:49:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.471 11:49:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.471 11:49:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.471 [2024-11-27 11:49:39.734933] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
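The `waitforlisten 72892` step above blocks until the bdevperf target is up and accepting connections on `/var/tmp/spdk.sock`. The real helper lives in `autotest_common.sh`; as a rough, hypothetical stand-in (not the SPDK implementation), the idea is a bounded poll for the UNIX socket:

```shell
# Hypothetical sketch only -- NOT the SPDK waitforlisten helper. Polls until a
# UNIX socket path appears, with a retry cap mirroring max_retries=100 above.
wait_for_sock() {
    local sock=$1
    local retries=${2:-100}
    while [ "$retries" -gt 0 ]; do
        # -S is true once the target has created its RPC listen socket.
        [ -S "$sock" ] && return 0
        retries=$((retries - 1))
        sleep 0.1
    done
    return 1
}

wait_for_sock /var/tmp/spdk.sock 5 || echo "no listener yet"
```

The real helper additionally verifies the PID is still alive while polling (the `kill -0` probe visible in the killprocess records earlier in this log).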
00:11:13.471 [2024-11-27 11:49:39.735155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72892 ] 00:11:13.730 [2024-11-27 11:49:39.891939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.730 [2024-11-27 11:49:40.010341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.990 [2024-11-27 11:49:40.213541] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.990 [2024-11-27 11:49:40.213614] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.250 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.250 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:14.250 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:14.250 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:14.250 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.250 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.250 BaseBdev1_malloc 00:11:14.250 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.250 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:14.250 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.250 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.250 true 00:11:14.250 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
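The per-device setup that follows is hard to read interleaved with xtrace output. Condensed from the trace, each of the four base devices is a malloc bdev wrapped by an error-injection bdev (the `EE_` name), wrapped by a passthru bdev; `rpc_cmd` is stubbed below so the sketch runs standalone (the real helper sends JSON-RPC to `/var/tmp/spdk.sock`):

```shell
# Stub for illustration only; the real rpc_cmd talks to the SPDK target.
rpc_cmd() { echo "rpc_cmd $*"; }

# Condensed from the trace: malloc -> error (EE_ prefix) -> passthru, per bdev.
# The EE_ layer is what bdev_error_inject_error targets later in the test.
for n in 1 2 3 4; do
    rpc_cmd bdev_malloc_create 32 512 -b "BaseBdev${n}_malloc"
    rpc_cmd bdev_error_create "BaseBdev${n}_malloc"            # exposes EE_BaseBdev${n}_malloc
    rpc_cmd bdev_passthru_create -b "EE_BaseBdev${n}_malloc" -p "BaseBdev${n}"
done
```

The four resulting `BaseBdevN` passthru devices are what `bdev_raid_create -r concat` assembles into `raid_bdev1` further down.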
00:11:14.250 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:14.250 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.250 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.250 [2024-11-27 11:49:40.624519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:14.250 [2024-11-27 11:49:40.624578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.250 [2024-11-27 11:49:40.624598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:14.250 [2024-11-27 11:49:40.624609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.250 [2024-11-27 11:49:40.626746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.250 [2024-11-27 11:49:40.626783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:14.250 BaseBdev1 00:11:14.250 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.250 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:14.250 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:14.250 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.250 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.511 BaseBdev2_malloc 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.511 true 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.511 [2024-11-27 11:49:40.690875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:14.511 [2024-11-27 11:49:40.690928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.511 [2024-11-27 11:49:40.690944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:14.511 [2024-11-27 11:49:40.690954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.511 [2024-11-27 11:49:40.693038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.511 [2024-11-27 11:49:40.693076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:14.511 BaseBdev2 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.511 BaseBdev3_malloc 00:11:14.511 11:49:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.511 true 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.511 [2024-11-27 11:49:40.770103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:14.511 [2024-11-27 11:49:40.770157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.511 [2024-11-27 11:49:40.770174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:14.511 [2024-11-27 11:49:40.770185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.511 [2024-11-27 11:49:40.772243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.511 [2024-11-27 11:49:40.772283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:14.511 BaseBdev3 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.511 BaseBdev4_malloc 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.511 true 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.511 [2024-11-27 11:49:40.837548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:14.511 [2024-11-27 11:49:40.837602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.511 [2024-11-27 11:49:40.837620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:14.511 [2024-11-27 11:49:40.837630] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.511 [2024-11-27 11:49:40.839700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.511 [2024-11-27 11:49:40.839799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:14.511 BaseBdev4 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.511 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.511 [2024-11-27 11:49:40.849606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.511 [2024-11-27 11:49:40.851448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:14.511 [2024-11-27 11:49:40.851578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:14.511 [2024-11-27 11:49:40.851662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:14.511 [2024-11-27 11:49:40.851907] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:14.511 [2024-11-27 11:49:40.851959] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:14.511 [2024-11-27 11:49:40.852222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:14.511 [2024-11-27 11:49:40.852419] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:14.511 [2024-11-27 11:49:40.852462] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:14.512 [2024-11-27 11:49:40.852655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.512 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.512 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:14.512 11:49:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.512 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.512 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.512 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.512 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.512 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.512 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.512 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.512 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.512 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.512 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.512 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.512 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.512 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.770 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.770 "name": "raid_bdev1", 00:11:14.770 "uuid": "d3a79cd4-acf0-4798-837f-5d7e8167d80e", 00:11:14.770 "strip_size_kb": 64, 00:11:14.770 "state": "online", 00:11:14.770 "raid_level": "concat", 00:11:14.771 "superblock": true, 00:11:14.771 "num_base_bdevs": 4, 00:11:14.771 "num_base_bdevs_discovered": 4, 00:11:14.771 "num_base_bdevs_operational": 4, 00:11:14.771 "base_bdevs_list": [ 
00:11:14.771 { 00:11:14.771 "name": "BaseBdev1", 00:11:14.771 "uuid": "ad9062fd-8fe8-53fd-872b-4bdc956899f3", 00:11:14.771 "is_configured": true, 00:11:14.771 "data_offset": 2048, 00:11:14.771 "data_size": 63488 00:11:14.771 }, 00:11:14.771 { 00:11:14.771 "name": "BaseBdev2", 00:11:14.771 "uuid": "17a357cc-1c68-5eb9-9331-8020ed703a2c", 00:11:14.771 "is_configured": true, 00:11:14.771 "data_offset": 2048, 00:11:14.771 "data_size": 63488 00:11:14.771 }, 00:11:14.771 { 00:11:14.771 "name": "BaseBdev3", 00:11:14.771 "uuid": "bbd319d6-241c-5701-a07b-3dfc72d5cf03", 00:11:14.771 "is_configured": true, 00:11:14.771 "data_offset": 2048, 00:11:14.771 "data_size": 63488 00:11:14.771 }, 00:11:14.771 { 00:11:14.771 "name": "BaseBdev4", 00:11:14.771 "uuid": "7efad0dc-7df0-5779-8551-826d79ba115c", 00:11:14.771 "is_configured": true, 00:11:14.771 "data_offset": 2048, 00:11:14.771 "data_size": 63488 00:11:14.771 } 00:11:14.771 ] 00:11:14.771 }' 00:11:14.771 11:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.771 11:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.030 11:49:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:15.030 11:49:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:15.030 [2024-11-27 11:49:41.397804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:15.967 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:15.967 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.967 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.967 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.967 11:49:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:15.968 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:15.968 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:15.968 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:15.968 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.968 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.968 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.968 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.968 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.968 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.968 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.968 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.968 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.968 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.968 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.968 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.968 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.230 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.230 11:49:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.230 "name": "raid_bdev1", 00:11:16.230 "uuid": "d3a79cd4-acf0-4798-837f-5d7e8167d80e", 00:11:16.230 "strip_size_kb": 64, 00:11:16.230 "state": "online", 00:11:16.230 "raid_level": "concat", 00:11:16.230 "superblock": true, 00:11:16.230 "num_base_bdevs": 4, 00:11:16.230 "num_base_bdevs_discovered": 4, 00:11:16.230 "num_base_bdevs_operational": 4, 00:11:16.230 "base_bdevs_list": [ 00:11:16.230 { 00:11:16.230 "name": "BaseBdev1", 00:11:16.230 "uuid": "ad9062fd-8fe8-53fd-872b-4bdc956899f3", 00:11:16.230 "is_configured": true, 00:11:16.230 "data_offset": 2048, 00:11:16.230 "data_size": 63488 00:11:16.230 }, 00:11:16.230 { 00:11:16.230 "name": "BaseBdev2", 00:11:16.230 "uuid": "17a357cc-1c68-5eb9-9331-8020ed703a2c", 00:11:16.230 "is_configured": true, 00:11:16.230 "data_offset": 2048, 00:11:16.230 "data_size": 63488 00:11:16.230 }, 00:11:16.230 { 00:11:16.230 "name": "BaseBdev3", 00:11:16.230 "uuid": "bbd319d6-241c-5701-a07b-3dfc72d5cf03", 00:11:16.230 "is_configured": true, 00:11:16.230 "data_offset": 2048, 00:11:16.230 "data_size": 63488 00:11:16.230 }, 00:11:16.230 { 00:11:16.230 "name": "BaseBdev4", 00:11:16.230 "uuid": "7efad0dc-7df0-5779-8551-826d79ba115c", 00:11:16.230 "is_configured": true, 00:11:16.230 "data_offset": 2048, 00:11:16.230 "data_size": 63488 00:11:16.230 } 00:11:16.230 ] 00:11:16.230 }' 00:11:16.230 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.230 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.489 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:16.489 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.489 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.489 [2024-11-27 11:49:42.761934] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:16.489 [2024-11-27 11:49:42.762059] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:16.489 [2024-11-27 11:49:42.765057] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:16.489 [2024-11-27 11:49:42.765156] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.489 [2024-11-27 11:49:42.765218] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:16.489 [2024-11-27 11:49:42.765265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:16.489 { 00:11:16.489 "results": [ 00:11:16.489 { 00:11:16.489 "job": "raid_bdev1", 00:11:16.489 "core_mask": "0x1", 00:11:16.489 "workload": "randrw", 00:11:16.489 "percentage": 50, 00:11:16.489 "status": "finished", 00:11:16.489 "queue_depth": 1, 00:11:16.489 "io_size": 131072, 00:11:16.489 "runtime": 1.365235, 00:11:16.489 "iops": 15235.472281328855, 00:11:16.489 "mibps": 1904.434035166107, 00:11:16.489 "io_failed": 1, 00:11:16.489 "io_timeout": 0, 00:11:16.489 "avg_latency_us": 90.91672272222384, 00:11:16.489 "min_latency_us": 25.9353711790393, 00:11:16.489 "max_latency_us": 1373.6803493449781 00:11:16.489 } 00:11:16.489 ], 00:11:16.489 "core_count": 1 00:11:16.489 } 00:11:16.489 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.489 11:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72892 00:11:16.489 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72892 ']' 00:11:16.489 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72892 00:11:16.489 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:16.489 11:49:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.489 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72892 00:11:16.489 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:16.489 killing process with pid 72892 00:11:16.489 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:16.489 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72892' 00:11:16.489 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72892 00:11:16.489 [2024-11-27 11:49:42.821507] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:16.489 11:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72892 00:11:17.056 [2024-11-27 11:49:43.147413] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:18.016 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LE5U04soDt 00:11:18.016 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:18.016 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:18.016 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:18.016 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:18.017 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:18.017 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:18.017 11:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:18.017 00:11:18.017 real 0m4.718s 00:11:18.017 user 0m5.575s 00:11:18.017 sys 0m0.547s 00:11:18.017 ************************************ 00:11:18.017 END TEST raid_read_error_test 
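[Editor's note] The `fail_per_s=0.73` that the script extracts from the bdevperf log above can be cross-checked against the results JSON printed earlier in this trace: one failed I/O (`"io_failed": 1`) over a `"runtime": 1.365235` seconds run. A sketch of that arithmetic:

```python
# Values copied from the bdevperf "results" JSON in the trace above.
io_failed = 1
runtime_s = 1.365235

# Failures per second, rounded to two decimals as in the bdevperf summary
# line that bdev_raid.sh@845 extracts with grep/awk.
fail_per_s = io_failed / runtime_s
assert f"{fail_per_s:.2f}" == "0.73"
print(f"fail_per_s = {fail_per_s:.2f}")
```

The nonzero value is what the `[[ 0.73 != \0\.\0\0 ]]` check above asserts: concat has no redundancy (`has_redundancy` returns 1), so the injected read error on `EE_BaseBdev1_malloc` is expected to surface as a failed I/O rather than be recovered.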
00:11:18.017 ************************************ 00:11:18.017 11:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.017 11:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.017 11:49:44 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:18.017 11:49:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:18.017 11:49:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.017 11:49:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:18.276 ************************************ 00:11:18.277 START TEST raid_write_error_test 00:11:18.277 ************************************ 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nuEt7FBi1i 00:11:18.277 11:49:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73038 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73038 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73038 ']' 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.277 11:49:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.277 [2024-11-27 11:49:44.515332] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:11:18.277 [2024-11-27 11:49:44.515553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73038 ] 00:11:18.537 [2024-11-27 11:49:44.688557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.537 [2024-11-27 11:49:44.801351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.797 [2024-11-27 11:49:45.001578] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:18.797 [2024-11-27 11:49:45.001614] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.056 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.057 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:19.057 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:19.057 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:19.057 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.057 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.057 BaseBdev1_malloc 00:11:19.057 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.057 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:19.057 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.057 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.057 true 00:11:19.057 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:19.057 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:19.057 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.057 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.057 [2024-11-27 11:49:45.408995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:19.057 [2024-11-27 11:49:45.409068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.057 [2024-11-27 11:49:45.409093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:19.057 [2024-11-27 11:49:45.409104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.057 [2024-11-27 11:49:45.411272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.057 [2024-11-27 11:49:45.411403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:19.057 BaseBdev1 00:11:19.057 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.057 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:19.057 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:19.057 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.057 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.318 BaseBdev2_malloc 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:19.318 11:49:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.318 true 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.318 [2024-11-27 11:49:45.476675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:19.318 [2024-11-27 11:49:45.476735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.318 [2024-11-27 11:49:45.476771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:19.318 [2024-11-27 11:49:45.476783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.318 [2024-11-27 11:49:45.479050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.318 [2024-11-27 11:49:45.479145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:19.318 BaseBdev2 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:19.318 BaseBdev3_malloc 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.318 true 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.318 [2024-11-27 11:49:45.554883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:19.318 [2024-11-27 11:49:45.554976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.318 [2024-11-27 11:49:45.555015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:19.318 [2024-11-27 11:49:45.555025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.318 [2024-11-27 11:49:45.557121] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.318 [2024-11-27 11:49:45.557162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:19.318 BaseBdev3 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.318 BaseBdev4_malloc 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.318 true 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.318 [2024-11-27 11:49:45.621716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:19.318 [2024-11-27 11:49:45.621776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.318 [2024-11-27 11:49:45.621796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:19.318 [2024-11-27 11:49:45.621807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.318 [2024-11-27 11:49:45.623916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.318 [2024-11-27 11:49:45.624040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:19.318 BaseBdev4 
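[Editor's note] With all four base bdevs built (malloc bdev, error bdev, passthru bdev per base), the test below assembles them into a concat raid. The `blockcnt 253952, blocklen 512` debug line that follows can be reproduced from the numbers in this trace; a sketch of the arithmetic:

```python
# From the trace: bdev_malloc_create 32 512 gives 32 MiB / 512 B = 65536 blocks;
# the superblock (-s) reserves data_offset = 2048, leaving data_size = 63488.
malloc_blocks = (32 * 1024 * 1024) // 512
data_offset = 2048
data_size = malloc_blocks - data_offset
assert data_size == 63488

# concat simply concatenates the data regions of its base bdevs.
num_base_bdevs = 4
raid_blockcnt = num_base_bdevs * data_size
assert raid_blockcnt == 253952  # matches "blockcnt 253952, blocklen 512"
print(raid_blockcnt)
```
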
00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.318 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.318 [2024-11-27 11:49:45.633775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:19.318 [2024-11-27 11:49:45.635570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:19.318 [2024-11-27 11:49:45.635646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:19.318 [2024-11-27 11:49:45.635706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:19.318 [2024-11-27 11:49:45.635945] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:19.318 [2024-11-27 11:49:45.635961] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:19.319 [2024-11-27 11:49:45.636209] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:19.319 [2024-11-27 11:49:45.636377] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:19.319 [2024-11-27 11:49:45.636387] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:19.319 [2024-11-27 11:49:45.636542] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.319 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.319 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:19.319 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:19.319 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.319 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.319 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.319 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.319 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.319 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.319 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.319 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.319 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.319 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.319 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.319 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.319 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.319 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.319 "name": "raid_bdev1", 00:11:19.319 "uuid": "69d846ce-0c69-4007-aae8-311a59c2b74e", 00:11:19.319 "strip_size_kb": 64, 00:11:19.319 "state": "online", 00:11:19.319 "raid_level": "concat", 00:11:19.319 "superblock": true, 00:11:19.319 "num_base_bdevs": 4, 00:11:19.319 "num_base_bdevs_discovered": 4, 00:11:19.319 
"num_base_bdevs_operational": 4, 00:11:19.319 "base_bdevs_list": [ 00:11:19.319 { 00:11:19.319 "name": "BaseBdev1", 00:11:19.319 "uuid": "6d600217-50f7-5869-8c8e-e7b9e3a08205", 00:11:19.319 "is_configured": true, 00:11:19.319 "data_offset": 2048, 00:11:19.319 "data_size": 63488 00:11:19.319 }, 00:11:19.319 { 00:11:19.319 "name": "BaseBdev2", 00:11:19.319 "uuid": "4b7ca0e2-68ea-590d-8fc1-a44373904ebb", 00:11:19.319 "is_configured": true, 00:11:19.319 "data_offset": 2048, 00:11:19.319 "data_size": 63488 00:11:19.319 }, 00:11:19.319 { 00:11:19.319 "name": "BaseBdev3", 00:11:19.319 "uuid": "79845c27-4fd0-5da6-a3c5-b574adb5d556", 00:11:19.319 "is_configured": true, 00:11:19.319 "data_offset": 2048, 00:11:19.319 "data_size": 63488 00:11:19.319 }, 00:11:19.319 { 00:11:19.319 "name": "BaseBdev4", 00:11:19.319 "uuid": "ddf06f39-48df-5600-ab8d-b63f2ca25da4", 00:11:19.319 "is_configured": true, 00:11:19.319 "data_offset": 2048, 00:11:19.319 "data_size": 63488 00:11:19.319 } 00:11:19.319 ] 00:11:19.319 }' 00:11:19.319 11:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.319 11:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.888 11:49:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:19.888 11:49:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:19.888 [2024-11-27 11:49:46.146319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:20.827 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:20.827 11:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.827 11:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.827 11:49:47 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.827 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:20.827 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:20.827 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:20.827 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:20.827 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:20.827 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.827 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.827 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.827 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.827 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.827 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.827 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.827 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.827 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.827 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.827 11:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.827 11:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.827 11:49:47 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.827 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.827 "name": "raid_bdev1", 00:11:20.827 "uuid": "69d846ce-0c69-4007-aae8-311a59c2b74e", 00:11:20.827 "strip_size_kb": 64, 00:11:20.827 "state": "online", 00:11:20.827 "raid_level": "concat", 00:11:20.827 "superblock": true, 00:11:20.827 "num_base_bdevs": 4, 00:11:20.827 "num_base_bdevs_discovered": 4, 00:11:20.827 "num_base_bdevs_operational": 4, 00:11:20.827 "base_bdevs_list": [ 00:11:20.827 { 00:11:20.827 "name": "BaseBdev1", 00:11:20.827 "uuid": "6d600217-50f7-5869-8c8e-e7b9e3a08205", 00:11:20.827 "is_configured": true, 00:11:20.827 "data_offset": 2048, 00:11:20.827 "data_size": 63488 00:11:20.827 }, 00:11:20.827 { 00:11:20.827 "name": "BaseBdev2", 00:11:20.827 "uuid": "4b7ca0e2-68ea-590d-8fc1-a44373904ebb", 00:11:20.827 "is_configured": true, 00:11:20.827 "data_offset": 2048, 00:11:20.827 "data_size": 63488 00:11:20.827 }, 00:11:20.827 { 00:11:20.827 "name": "BaseBdev3", 00:11:20.827 "uuid": "79845c27-4fd0-5da6-a3c5-b574adb5d556", 00:11:20.827 "is_configured": true, 00:11:20.827 "data_offset": 2048, 00:11:20.827 "data_size": 63488 00:11:20.827 }, 00:11:20.827 { 00:11:20.827 "name": "BaseBdev4", 00:11:20.827 "uuid": "ddf06f39-48df-5600-ab8d-b63f2ca25da4", 00:11:20.827 "is_configured": true, 00:11:20.827 "data_offset": 2048, 00:11:20.827 "data_size": 63488 00:11:20.827 } 00:11:20.827 ] 00:11:20.827 }' 00:11:20.827 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.827 11:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.399 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:21.399 11:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.399 11:49:47 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:21.399 [2024-11-27 11:49:47.513274] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:21.399 [2024-11-27 11:49:47.513383] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:21.399 [2024-11-27 11:49:47.516155] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:21.399 [2024-11-27 11:49:47.516253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.399 [2024-11-27 11:49:47.516329] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:21.399 [2024-11-27 11:49:47.516387] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:21.399 { 00:11:21.399 "results": [ 00:11:21.399 { 00:11:21.399 "job": "raid_bdev1", 00:11:21.399 "core_mask": "0x1", 00:11:21.399 "workload": "randrw", 00:11:21.399 "percentage": 50, 00:11:21.399 "status": "finished", 00:11:21.399 "queue_depth": 1, 00:11:21.399 "io_size": 131072, 00:11:21.399 "runtime": 1.367793, 00:11:21.399 "iops": 15164.57534144421, 00:11:21.399 "mibps": 1895.5719176805262, 00:11:21.399 "io_failed": 1, 00:11:21.399 "io_timeout": 0, 00:11:21.399 "avg_latency_us": 91.34450247539708, 00:11:21.399 "min_latency_us": 26.047161572052403, 00:11:21.399 "max_latency_us": 1423.7624454148472 00:11:21.399 } 00:11:21.399 ], 00:11:21.399 "core_count": 1 00:11:21.399 } 00:11:21.399 11:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.399 11:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73038 00:11:21.399 11:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73038 ']' 00:11:21.399 11:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73038 00:11:21.399 11:49:47 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:21.399 11:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:21.399 11:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73038 00:11:21.399 killing process with pid 73038 00:11:21.399 11:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:21.399 11:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:21.399 11:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73038' 00:11:21.399 11:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73038 00:11:21.399 [2024-11-27 11:49:47.559787] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:21.399 11:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73038 00:11:21.667 [2024-11-27 11:49:47.885469] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:23.061 11:49:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:23.061 11:49:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nuEt7FBi1i 00:11:23.061 11:49:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:23.061 11:49:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:23.061 11:49:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:23.061 ************************************ 00:11:23.061 END TEST raid_write_error_test 00:11:23.061 ************************************ 00:11:23.061 11:49:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:23.061 11:49:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:23.061 11:49:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:23.061 00:11:23.061 real 0m4.689s 00:11:23.061 user 0m5.505s 00:11:23.061 sys 0m0.557s 00:11:23.061 11:49:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.061 11:49:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.061 11:49:49 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:23.061 11:49:49 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:23.061 11:49:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:23.061 11:49:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.061 11:49:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:23.061 ************************************ 00:11:23.061 START TEST raid_state_function_test 00:11:23.061 ************************************ 00:11:23.061 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:11:23.061 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:23.061 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:23.061 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:23.061 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:23.061 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:23.061 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.061 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:23.061 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:11:23.061 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.061 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:23.061 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:23.061 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.062 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:23.062 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:23.062 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.062 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:23.062 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:23.062 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.062 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:23.062 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:23.062 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:23.062 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:23.062 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:23.062 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:23.062 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:23.062 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:23.062 11:49:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:23.062 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:23.062 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:23.062 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73176 00:11:23.062 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73176' 00:11:23.062 Process raid pid: 73176 00:11:23.062 11:49:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73176 00:11:23.062 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73176 ']' 00:11:23.062 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.062 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:23.062 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.062 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:23.062 11:49:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.062 [2024-11-27 11:49:49.264823] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:11:23.062 [2024-11-27 11:49:49.265076] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.321 [2024-11-27 11:49:49.461339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.321 [2024-11-27 11:49:49.577757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.580 [2024-11-27 11:49:49.782596] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.580 [2024-11-27 11:49:49.782732] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.839 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:23.839 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:23.839 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:23.839 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.839 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.839 [2024-11-27 11:49:50.100415] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:23.839 [2024-11-27 11:49:50.100474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:23.839 [2024-11-27 11:49:50.100486] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:23.839 [2024-11-27 11:49:50.100495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:23.839 [2024-11-27 11:49:50.100502] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:23.839 [2024-11-27 11:49:50.100511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:23.839 [2024-11-27 11:49:50.100523] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:23.839 [2024-11-27 11:49:50.100532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:23.839 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.839 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:23.839 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.839 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.839 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.839 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.839 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.839 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.839 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.839 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.839 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.839 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.839 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.839 11:49:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.839 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.839 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.839 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.839 "name": "Existed_Raid", 00:11:23.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.839 "strip_size_kb": 0, 00:11:23.839 "state": "configuring", 00:11:23.839 "raid_level": "raid1", 00:11:23.839 "superblock": false, 00:11:23.839 "num_base_bdevs": 4, 00:11:23.839 "num_base_bdevs_discovered": 0, 00:11:23.839 "num_base_bdevs_operational": 4, 00:11:23.839 "base_bdevs_list": [ 00:11:23.839 { 00:11:23.839 "name": "BaseBdev1", 00:11:23.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.839 "is_configured": false, 00:11:23.839 "data_offset": 0, 00:11:23.839 "data_size": 0 00:11:23.839 }, 00:11:23.839 { 00:11:23.839 "name": "BaseBdev2", 00:11:23.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.839 "is_configured": false, 00:11:23.839 "data_offset": 0, 00:11:23.839 "data_size": 0 00:11:23.839 }, 00:11:23.839 { 00:11:23.839 "name": "BaseBdev3", 00:11:23.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.839 "is_configured": false, 00:11:23.839 "data_offset": 0, 00:11:23.839 "data_size": 0 00:11:23.839 }, 00:11:23.839 { 00:11:23.839 "name": "BaseBdev4", 00:11:23.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.839 "is_configured": false, 00:11:23.839 "data_offset": 0, 00:11:23.839 "data_size": 0 00:11:23.839 } 00:11:23.839 ] 00:11:23.839 }' 00:11:23.839 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.839 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.409 [2024-11-27 11:49:50.571581] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:24.409 [2024-11-27 11:49:50.571671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.409 [2024-11-27 11:49:50.579545] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:24.409 [2024-11-27 11:49:50.579627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:24.409 [2024-11-27 11:49:50.579657] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:24.409 [2024-11-27 11:49:50.579681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:24.409 [2024-11-27 11:49:50.579717] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:24.409 [2024-11-27 11:49:50.579782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:24.409 [2024-11-27 11:49:50.579803] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:24.409 [2024-11-27 11:49:50.579852] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.409 [2024-11-27 11:49:50.623184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:24.409 BaseBdev1 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.409 [ 00:11:24.409 { 00:11:24.409 "name": "BaseBdev1", 00:11:24.409 "aliases": [ 00:11:24.409 "47b2fee2-902a-420b-ac43-f8ee3cf9ed7a" 00:11:24.409 ], 00:11:24.409 "product_name": "Malloc disk", 00:11:24.409 "block_size": 512, 00:11:24.409 "num_blocks": 65536, 00:11:24.409 "uuid": "47b2fee2-902a-420b-ac43-f8ee3cf9ed7a", 00:11:24.409 "assigned_rate_limits": { 00:11:24.409 "rw_ios_per_sec": 0, 00:11:24.409 "rw_mbytes_per_sec": 0, 00:11:24.409 "r_mbytes_per_sec": 0, 00:11:24.409 "w_mbytes_per_sec": 0 00:11:24.409 }, 00:11:24.409 "claimed": true, 00:11:24.409 "claim_type": "exclusive_write", 00:11:24.409 "zoned": false, 00:11:24.409 "supported_io_types": { 00:11:24.409 "read": true, 00:11:24.409 "write": true, 00:11:24.409 "unmap": true, 00:11:24.409 "flush": true, 00:11:24.409 "reset": true, 00:11:24.409 "nvme_admin": false, 00:11:24.409 "nvme_io": false, 00:11:24.409 "nvme_io_md": false, 00:11:24.409 "write_zeroes": true, 00:11:24.409 "zcopy": true, 00:11:24.409 "get_zone_info": false, 00:11:24.409 "zone_management": false, 00:11:24.409 "zone_append": false, 00:11:24.409 "compare": false, 00:11:24.409 "compare_and_write": false, 00:11:24.409 "abort": true, 00:11:24.409 "seek_hole": false, 00:11:24.409 "seek_data": false, 00:11:24.409 "copy": true, 00:11:24.409 "nvme_iov_md": false 00:11:24.409 }, 00:11:24.409 "memory_domains": [ 00:11:24.409 { 00:11:24.409 "dma_device_id": "system", 00:11:24.409 "dma_device_type": 1 00:11:24.409 }, 00:11:24.409 { 00:11:24.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.409 "dma_device_type": 2 00:11:24.409 } 00:11:24.409 ], 00:11:24.409 "driver_specific": {} 00:11:24.409 } 00:11:24.409 ] 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.409 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.410 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.410 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.410 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.410 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.410 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.410 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.410 "name": "Existed_Raid", 
00:11:24.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.410 "strip_size_kb": 0, 00:11:24.410 "state": "configuring", 00:11:24.410 "raid_level": "raid1", 00:11:24.410 "superblock": false, 00:11:24.410 "num_base_bdevs": 4, 00:11:24.410 "num_base_bdevs_discovered": 1, 00:11:24.410 "num_base_bdevs_operational": 4, 00:11:24.410 "base_bdevs_list": [ 00:11:24.410 { 00:11:24.410 "name": "BaseBdev1", 00:11:24.410 "uuid": "47b2fee2-902a-420b-ac43-f8ee3cf9ed7a", 00:11:24.410 "is_configured": true, 00:11:24.410 "data_offset": 0, 00:11:24.410 "data_size": 65536 00:11:24.410 }, 00:11:24.410 { 00:11:24.410 "name": "BaseBdev2", 00:11:24.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.410 "is_configured": false, 00:11:24.410 "data_offset": 0, 00:11:24.410 "data_size": 0 00:11:24.410 }, 00:11:24.410 { 00:11:24.410 "name": "BaseBdev3", 00:11:24.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.410 "is_configured": false, 00:11:24.410 "data_offset": 0, 00:11:24.410 "data_size": 0 00:11:24.410 }, 00:11:24.410 { 00:11:24.410 "name": "BaseBdev4", 00:11:24.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.410 "is_configured": false, 00:11:24.410 "data_offset": 0, 00:11:24.410 "data_size": 0 00:11:24.410 } 00:11:24.410 ] 00:11:24.410 }' 00:11:24.410 11:49:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.410 11:49:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.979 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:24.979 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.979 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.979 [2024-11-27 11:49:51.118396] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:24.979 [2024-11-27 11:49:51.118454] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:24.979 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.979 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:24.979 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.979 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.979 [2024-11-27 11:49:51.130409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:24.979 [2024-11-27 11:49:51.132286] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:24.979 [2024-11-27 11:49:51.132333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:24.979 [2024-11-27 11:49:51.132344] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:24.979 [2024-11-27 11:49:51.132354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:24.979 [2024-11-27 11:49:51.132361] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:24.979 [2024-11-27 11:49:51.132370] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:24.979 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.979 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:24.979 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:24.979 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:24.979 
11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.979 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.979 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.979 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.979 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.979 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.979 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.979 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.979 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.979 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.979 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.979 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.979 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.979 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.979 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.979 "name": "Existed_Raid", 00:11:24.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.979 "strip_size_kb": 0, 00:11:24.979 "state": "configuring", 00:11:24.979 "raid_level": "raid1", 00:11:24.979 "superblock": false, 00:11:24.979 "num_base_bdevs": 4, 00:11:24.979 "num_base_bdevs_discovered": 1, 
00:11:24.979 "num_base_bdevs_operational": 4, 00:11:24.979 "base_bdevs_list": [ 00:11:24.979 { 00:11:24.979 "name": "BaseBdev1", 00:11:24.979 "uuid": "47b2fee2-902a-420b-ac43-f8ee3cf9ed7a", 00:11:24.979 "is_configured": true, 00:11:24.979 "data_offset": 0, 00:11:24.979 "data_size": 65536 00:11:24.979 }, 00:11:24.979 { 00:11:24.979 "name": "BaseBdev2", 00:11:24.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.979 "is_configured": false, 00:11:24.979 "data_offset": 0, 00:11:24.979 "data_size": 0 00:11:24.979 }, 00:11:24.979 { 00:11:24.979 "name": "BaseBdev3", 00:11:24.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.979 "is_configured": false, 00:11:24.979 "data_offset": 0, 00:11:24.979 "data_size": 0 00:11:24.979 }, 00:11:24.979 { 00:11:24.979 "name": "BaseBdev4", 00:11:24.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.979 "is_configured": false, 00:11:24.979 "data_offset": 0, 00:11:24.979 "data_size": 0 00:11:24.979 } 00:11:24.979 ] 00:11:24.979 }' 00:11:24.979 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.979 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.238 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:25.238 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.238 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.497 [2024-11-27 11:49:51.645654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:25.497 BaseBdev2 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.497 [ 00:11:25.497 { 00:11:25.497 "name": "BaseBdev2", 00:11:25.497 "aliases": [ 00:11:25.497 "fafecfcb-081a-4ad2-86fd-1a321cb3d900" 00:11:25.497 ], 00:11:25.497 "product_name": "Malloc disk", 00:11:25.497 "block_size": 512, 00:11:25.497 "num_blocks": 65536, 00:11:25.497 "uuid": "fafecfcb-081a-4ad2-86fd-1a321cb3d900", 00:11:25.497 "assigned_rate_limits": { 00:11:25.497 "rw_ios_per_sec": 0, 00:11:25.497 "rw_mbytes_per_sec": 0, 00:11:25.497 "r_mbytes_per_sec": 0, 00:11:25.497 "w_mbytes_per_sec": 0 00:11:25.497 }, 00:11:25.497 "claimed": true, 00:11:25.497 "claim_type": "exclusive_write", 00:11:25.497 "zoned": false, 00:11:25.497 "supported_io_types": { 00:11:25.497 "read": true, 
00:11:25.497 "write": true, 00:11:25.497 "unmap": true, 00:11:25.497 "flush": true, 00:11:25.497 "reset": true, 00:11:25.497 "nvme_admin": false, 00:11:25.497 "nvme_io": false, 00:11:25.497 "nvme_io_md": false, 00:11:25.497 "write_zeroes": true, 00:11:25.497 "zcopy": true, 00:11:25.497 "get_zone_info": false, 00:11:25.497 "zone_management": false, 00:11:25.497 "zone_append": false, 00:11:25.497 "compare": false, 00:11:25.497 "compare_and_write": false, 00:11:25.497 "abort": true, 00:11:25.497 "seek_hole": false, 00:11:25.497 "seek_data": false, 00:11:25.497 "copy": true, 00:11:25.497 "nvme_iov_md": false 00:11:25.497 }, 00:11:25.497 "memory_domains": [ 00:11:25.497 { 00:11:25.497 "dma_device_id": "system", 00:11:25.497 "dma_device_type": 1 00:11:25.497 }, 00:11:25.497 { 00:11:25.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.497 "dma_device_type": 2 00:11:25.497 } 00:11:25.497 ], 00:11:25.497 "driver_specific": {} 00:11:25.497 } 00:11:25.497 ] 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.497 "name": "Existed_Raid", 00:11:25.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.497 "strip_size_kb": 0, 00:11:25.497 "state": "configuring", 00:11:25.497 "raid_level": "raid1", 00:11:25.497 "superblock": false, 00:11:25.497 "num_base_bdevs": 4, 00:11:25.497 "num_base_bdevs_discovered": 2, 00:11:25.497 "num_base_bdevs_operational": 4, 00:11:25.497 "base_bdevs_list": [ 00:11:25.497 { 00:11:25.497 "name": "BaseBdev1", 00:11:25.497 "uuid": "47b2fee2-902a-420b-ac43-f8ee3cf9ed7a", 00:11:25.497 "is_configured": true, 00:11:25.497 "data_offset": 0, 00:11:25.497 "data_size": 65536 00:11:25.497 }, 00:11:25.497 { 00:11:25.497 "name": "BaseBdev2", 00:11:25.497 "uuid": "fafecfcb-081a-4ad2-86fd-1a321cb3d900", 00:11:25.497 "is_configured": true, 
00:11:25.497 "data_offset": 0, 00:11:25.497 "data_size": 65536 00:11:25.497 }, 00:11:25.497 { 00:11:25.497 "name": "BaseBdev3", 00:11:25.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.497 "is_configured": false, 00:11:25.497 "data_offset": 0, 00:11:25.497 "data_size": 0 00:11:25.497 }, 00:11:25.497 { 00:11:25.497 "name": "BaseBdev4", 00:11:25.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.497 "is_configured": false, 00:11:25.497 "data_offset": 0, 00:11:25.497 "data_size": 0 00:11:25.497 } 00:11:25.497 ] 00:11:25.497 }' 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.497 11:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.766 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:25.766 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.766 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.036 [2024-11-27 11:49:52.194555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:26.036 BaseBdev3 00:11:26.036 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.036 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:26.036 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:26.036 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.036 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:26.036 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.036 11:49:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.036 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.036 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.036 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.036 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.036 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:26.036 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.036 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.036 [ 00:11:26.036 { 00:11:26.036 "name": "BaseBdev3", 00:11:26.036 "aliases": [ 00:11:26.036 "f5c570e2-47cf-43f5-8322-f419cee773d7" 00:11:26.036 ], 00:11:26.036 "product_name": "Malloc disk", 00:11:26.036 "block_size": 512, 00:11:26.036 "num_blocks": 65536, 00:11:26.036 "uuid": "f5c570e2-47cf-43f5-8322-f419cee773d7", 00:11:26.036 "assigned_rate_limits": { 00:11:26.036 "rw_ios_per_sec": 0, 00:11:26.036 "rw_mbytes_per_sec": 0, 00:11:26.036 "r_mbytes_per_sec": 0, 00:11:26.036 "w_mbytes_per_sec": 0 00:11:26.036 }, 00:11:26.036 "claimed": true, 00:11:26.036 "claim_type": "exclusive_write", 00:11:26.036 "zoned": false, 00:11:26.036 "supported_io_types": { 00:11:26.036 "read": true, 00:11:26.036 "write": true, 00:11:26.036 "unmap": true, 00:11:26.036 "flush": true, 00:11:26.036 "reset": true, 00:11:26.036 "nvme_admin": false, 00:11:26.036 "nvme_io": false, 00:11:26.036 "nvme_io_md": false, 00:11:26.036 "write_zeroes": true, 00:11:26.036 "zcopy": true, 00:11:26.036 "get_zone_info": false, 00:11:26.036 "zone_management": false, 00:11:26.036 "zone_append": false, 00:11:26.036 "compare": false, 00:11:26.036 "compare_and_write": false, 
00:11:26.036 "abort": true, 00:11:26.036 "seek_hole": false, 00:11:26.036 "seek_data": false, 00:11:26.036 "copy": true, 00:11:26.036 "nvme_iov_md": false 00:11:26.036 }, 00:11:26.036 "memory_domains": [ 00:11:26.036 { 00:11:26.036 "dma_device_id": "system", 00:11:26.037 "dma_device_type": 1 00:11:26.037 }, 00:11:26.037 { 00:11:26.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.037 "dma_device_type": 2 00:11:26.037 } 00:11:26.037 ], 00:11:26.037 "driver_specific": {} 00:11:26.037 } 00:11:26.037 ] 00:11:26.037 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.037 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:26.037 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:26.037 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:26.037 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:26.037 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.037 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.037 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.037 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.037 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.037 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.037 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.037 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:26.037 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.037 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.037 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.037 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.037 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.037 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.037 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.037 "name": "Existed_Raid", 00:11:26.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.037 "strip_size_kb": 0, 00:11:26.037 "state": "configuring", 00:11:26.037 "raid_level": "raid1", 00:11:26.037 "superblock": false, 00:11:26.037 "num_base_bdevs": 4, 00:11:26.037 "num_base_bdevs_discovered": 3, 00:11:26.037 "num_base_bdevs_operational": 4, 00:11:26.037 "base_bdevs_list": [ 00:11:26.037 { 00:11:26.037 "name": "BaseBdev1", 00:11:26.037 "uuid": "47b2fee2-902a-420b-ac43-f8ee3cf9ed7a", 00:11:26.037 "is_configured": true, 00:11:26.037 "data_offset": 0, 00:11:26.037 "data_size": 65536 00:11:26.037 }, 00:11:26.037 { 00:11:26.037 "name": "BaseBdev2", 00:11:26.037 "uuid": "fafecfcb-081a-4ad2-86fd-1a321cb3d900", 00:11:26.037 "is_configured": true, 00:11:26.037 "data_offset": 0, 00:11:26.037 "data_size": 65536 00:11:26.037 }, 00:11:26.037 { 00:11:26.037 "name": "BaseBdev3", 00:11:26.037 "uuid": "f5c570e2-47cf-43f5-8322-f419cee773d7", 00:11:26.037 "is_configured": true, 00:11:26.037 "data_offset": 0, 00:11:26.037 "data_size": 65536 00:11:26.037 }, 00:11:26.037 { 00:11:26.037 "name": "BaseBdev4", 00:11:26.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.037 "is_configured": false, 
00:11:26.037 "data_offset": 0, 00:11:26.037 "data_size": 0 00:11:26.037 } 00:11:26.037 ] 00:11:26.037 }' 00:11:26.037 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.037 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.607 [2024-11-27 11:49:52.750739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:26.607 [2024-11-27 11:49:52.750865] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:26.607 [2024-11-27 11:49:52.750892] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:26.607 [2024-11-27 11:49:52.751215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:26.607 [2024-11-27 11:49:52.751446] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:26.607 [2024-11-27 11:49:52.751498] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:26.607 [2024-11-27 11:49:52.751818] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.607 BaseBdev4 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.607 [ 00:11:26.607 { 00:11:26.607 "name": "BaseBdev4", 00:11:26.607 "aliases": [ 00:11:26.607 "40de0fd7-2bb0-4fd0-bf0f-6295ab96ccf2" 00:11:26.607 ], 00:11:26.607 "product_name": "Malloc disk", 00:11:26.607 "block_size": 512, 00:11:26.607 "num_blocks": 65536, 00:11:26.607 "uuid": "40de0fd7-2bb0-4fd0-bf0f-6295ab96ccf2", 00:11:26.607 "assigned_rate_limits": { 00:11:26.607 "rw_ios_per_sec": 0, 00:11:26.607 "rw_mbytes_per_sec": 0, 00:11:26.607 "r_mbytes_per_sec": 0, 00:11:26.607 "w_mbytes_per_sec": 0 00:11:26.607 }, 00:11:26.607 "claimed": true, 00:11:26.607 "claim_type": "exclusive_write", 00:11:26.607 "zoned": false, 00:11:26.607 "supported_io_types": { 00:11:26.607 "read": true, 00:11:26.607 "write": true, 00:11:26.607 "unmap": true, 00:11:26.607 "flush": true, 00:11:26.607 "reset": true, 00:11:26.607 
"nvme_admin": false, 00:11:26.607 "nvme_io": false, 00:11:26.607 "nvme_io_md": false, 00:11:26.607 "write_zeroes": true, 00:11:26.607 "zcopy": true, 00:11:26.607 "get_zone_info": false, 00:11:26.607 "zone_management": false, 00:11:26.607 "zone_append": false, 00:11:26.607 "compare": false, 00:11:26.607 "compare_and_write": false, 00:11:26.607 "abort": true, 00:11:26.607 "seek_hole": false, 00:11:26.607 "seek_data": false, 00:11:26.607 "copy": true, 00:11:26.607 "nvme_iov_md": false 00:11:26.607 }, 00:11:26.607 "memory_domains": [ 00:11:26.607 { 00:11:26.607 "dma_device_id": "system", 00:11:26.607 "dma_device_type": 1 00:11:26.607 }, 00:11:26.607 { 00:11:26.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.607 "dma_device_type": 2 00:11:26.607 } 00:11:26.607 ], 00:11:26.607 "driver_specific": {} 00:11:26.607 } 00:11:26.607 ] 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.607 11:49:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.607 "name": "Existed_Raid", 00:11:26.607 "uuid": "cba950c2-a750-40f4-94e8-eeb17f309098", 00:11:26.607 "strip_size_kb": 0, 00:11:26.607 "state": "online", 00:11:26.607 "raid_level": "raid1", 00:11:26.607 "superblock": false, 00:11:26.607 "num_base_bdevs": 4, 00:11:26.607 "num_base_bdevs_discovered": 4, 00:11:26.607 "num_base_bdevs_operational": 4, 00:11:26.607 "base_bdevs_list": [ 00:11:26.607 { 00:11:26.607 "name": "BaseBdev1", 00:11:26.607 "uuid": "47b2fee2-902a-420b-ac43-f8ee3cf9ed7a", 00:11:26.607 "is_configured": true, 00:11:26.607 "data_offset": 0, 00:11:26.607 "data_size": 65536 00:11:26.607 }, 00:11:26.607 { 00:11:26.607 "name": "BaseBdev2", 00:11:26.607 "uuid": "fafecfcb-081a-4ad2-86fd-1a321cb3d900", 00:11:26.607 "is_configured": true, 00:11:26.607 "data_offset": 0, 00:11:26.607 "data_size": 65536 00:11:26.607 }, 00:11:26.607 { 00:11:26.607 "name": "BaseBdev3", 00:11:26.607 "uuid": 
"f5c570e2-47cf-43f5-8322-f419cee773d7", 00:11:26.607 "is_configured": true, 00:11:26.607 "data_offset": 0, 00:11:26.607 "data_size": 65536 00:11:26.607 }, 00:11:26.607 { 00:11:26.607 "name": "BaseBdev4", 00:11:26.607 "uuid": "40de0fd7-2bb0-4fd0-bf0f-6295ab96ccf2", 00:11:26.607 "is_configured": true, 00:11:26.607 "data_offset": 0, 00:11:26.607 "data_size": 65536 00:11:26.607 } 00:11:26.607 ] 00:11:26.607 }' 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.607 11:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.868 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:26.868 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:26.868 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:26.868 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:26.868 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:26.868 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:26.868 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:26.868 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:26.868 11:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.868 11:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.868 [2024-11-27 11:49:53.214345] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:26.868 11:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.127 11:49:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:27.127 "name": "Existed_Raid", 00:11:27.127 "aliases": [ 00:11:27.127 "cba950c2-a750-40f4-94e8-eeb17f309098" 00:11:27.127 ], 00:11:27.127 "product_name": "Raid Volume", 00:11:27.127 "block_size": 512, 00:11:27.127 "num_blocks": 65536, 00:11:27.127 "uuid": "cba950c2-a750-40f4-94e8-eeb17f309098", 00:11:27.127 "assigned_rate_limits": { 00:11:27.127 "rw_ios_per_sec": 0, 00:11:27.127 "rw_mbytes_per_sec": 0, 00:11:27.127 "r_mbytes_per_sec": 0, 00:11:27.127 "w_mbytes_per_sec": 0 00:11:27.127 }, 00:11:27.127 "claimed": false, 00:11:27.127 "zoned": false, 00:11:27.127 "supported_io_types": { 00:11:27.127 "read": true, 00:11:27.127 "write": true, 00:11:27.127 "unmap": false, 00:11:27.127 "flush": false, 00:11:27.127 "reset": true, 00:11:27.127 "nvme_admin": false, 00:11:27.127 "nvme_io": false, 00:11:27.127 "nvme_io_md": false, 00:11:27.127 "write_zeroes": true, 00:11:27.127 "zcopy": false, 00:11:27.127 "get_zone_info": false, 00:11:27.127 "zone_management": false, 00:11:27.127 "zone_append": false, 00:11:27.127 "compare": false, 00:11:27.127 "compare_and_write": false, 00:11:27.127 "abort": false, 00:11:27.127 "seek_hole": false, 00:11:27.127 "seek_data": false, 00:11:27.127 "copy": false, 00:11:27.127 "nvme_iov_md": false 00:11:27.127 }, 00:11:27.127 "memory_domains": [ 00:11:27.127 { 00:11:27.127 "dma_device_id": "system", 00:11:27.127 "dma_device_type": 1 00:11:27.127 }, 00:11:27.127 { 00:11:27.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.127 "dma_device_type": 2 00:11:27.127 }, 00:11:27.127 { 00:11:27.127 "dma_device_id": "system", 00:11:27.127 "dma_device_type": 1 00:11:27.127 }, 00:11:27.127 { 00:11:27.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.127 "dma_device_type": 2 00:11:27.127 }, 00:11:27.127 { 00:11:27.127 "dma_device_id": "system", 00:11:27.127 "dma_device_type": 1 00:11:27.127 }, 00:11:27.127 { 00:11:27.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:27.127 "dma_device_type": 2 00:11:27.127 }, 00:11:27.127 { 00:11:27.127 "dma_device_id": "system", 00:11:27.127 "dma_device_type": 1 00:11:27.127 }, 00:11:27.127 { 00:11:27.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.127 "dma_device_type": 2 00:11:27.127 } 00:11:27.127 ], 00:11:27.127 "driver_specific": { 00:11:27.127 "raid": { 00:11:27.127 "uuid": "cba950c2-a750-40f4-94e8-eeb17f309098", 00:11:27.127 "strip_size_kb": 0, 00:11:27.127 "state": "online", 00:11:27.127 "raid_level": "raid1", 00:11:27.127 "superblock": false, 00:11:27.127 "num_base_bdevs": 4, 00:11:27.127 "num_base_bdevs_discovered": 4, 00:11:27.127 "num_base_bdevs_operational": 4, 00:11:27.127 "base_bdevs_list": [ 00:11:27.128 { 00:11:27.128 "name": "BaseBdev1", 00:11:27.128 "uuid": "47b2fee2-902a-420b-ac43-f8ee3cf9ed7a", 00:11:27.128 "is_configured": true, 00:11:27.128 "data_offset": 0, 00:11:27.128 "data_size": 65536 00:11:27.128 }, 00:11:27.128 { 00:11:27.128 "name": "BaseBdev2", 00:11:27.128 "uuid": "fafecfcb-081a-4ad2-86fd-1a321cb3d900", 00:11:27.128 "is_configured": true, 00:11:27.128 "data_offset": 0, 00:11:27.128 "data_size": 65536 00:11:27.128 }, 00:11:27.128 { 00:11:27.128 "name": "BaseBdev3", 00:11:27.128 "uuid": "f5c570e2-47cf-43f5-8322-f419cee773d7", 00:11:27.128 "is_configured": true, 00:11:27.128 "data_offset": 0, 00:11:27.128 "data_size": 65536 00:11:27.128 }, 00:11:27.128 { 00:11:27.128 "name": "BaseBdev4", 00:11:27.128 "uuid": "40de0fd7-2bb0-4fd0-bf0f-6295ab96ccf2", 00:11:27.128 "is_configured": true, 00:11:27.128 "data_offset": 0, 00:11:27.128 "data_size": 65536 00:11:27.128 } 00:11:27.128 ] 00:11:27.128 } 00:11:27.128 } 00:11:27.128 }' 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:27.128 BaseBdev2 00:11:27.128 BaseBdev3 
00:11:27.128 BaseBdev4' 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.128 11:49:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.128 11:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.388 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.388 11:49:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.388 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:27.388 11:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.388 11:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.388 [2024-11-27 11:49:53.537562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:27.388 11:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.388 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:27.388 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:27.388 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:27.388 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:27.388 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:27.388 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:27.388 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.388 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.388 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.388 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.388 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.388 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.388 
11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.388 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.388 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.388 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.388 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.388 11:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.388 11:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.388 11:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.388 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.388 "name": "Existed_Raid", 00:11:27.388 "uuid": "cba950c2-a750-40f4-94e8-eeb17f309098", 00:11:27.388 "strip_size_kb": 0, 00:11:27.388 "state": "online", 00:11:27.388 "raid_level": "raid1", 00:11:27.388 "superblock": false, 00:11:27.388 "num_base_bdevs": 4, 00:11:27.388 "num_base_bdevs_discovered": 3, 00:11:27.388 "num_base_bdevs_operational": 3, 00:11:27.388 "base_bdevs_list": [ 00:11:27.388 { 00:11:27.388 "name": null, 00:11:27.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.388 "is_configured": false, 00:11:27.388 "data_offset": 0, 00:11:27.388 "data_size": 65536 00:11:27.388 }, 00:11:27.388 { 00:11:27.388 "name": "BaseBdev2", 00:11:27.388 "uuid": "fafecfcb-081a-4ad2-86fd-1a321cb3d900", 00:11:27.388 "is_configured": true, 00:11:27.388 "data_offset": 0, 00:11:27.388 "data_size": 65536 00:11:27.388 }, 00:11:27.388 { 00:11:27.388 "name": "BaseBdev3", 00:11:27.388 "uuid": "f5c570e2-47cf-43f5-8322-f419cee773d7", 00:11:27.388 "is_configured": true, 00:11:27.388 "data_offset": 0, 
00:11:27.388 "data_size": 65536 00:11:27.388 }, 00:11:27.388 { 00:11:27.388 "name": "BaseBdev4", 00:11:27.388 "uuid": "40de0fd7-2bb0-4fd0-bf0f-6295ab96ccf2", 00:11:27.388 "is_configured": true, 00:11:27.388 "data_offset": 0, 00:11:27.388 "data_size": 65536 00:11:27.388 } 00:11:27.388 ] 00:11:27.388 }' 00:11:27.388 11:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.388 11:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.958 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:27.958 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.958 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:27.958 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.958 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.958 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.958 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.958 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:27.958 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:27.958 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:27.958 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.958 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.958 [2024-11-27 11:49:54.143941] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:27.958 11:49:54 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.958 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:27.958 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.958 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.958 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:27.958 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.958 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.958 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.958 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:27.958 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:27.958 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:27.958 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.958 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.958 [2024-11-27 11:49:54.296506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.245 [2024-11-27 11:49:54.449374] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:28.245 [2024-11-27 11:49:54.449478] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:28.245 [2024-11-27 11:49:54.548228] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:28.245 [2024-11-27 11:49:54.548378] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:28.245 [2024-11-27 11:49:54.548422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.245 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.505 BaseBdev2 00:11:28.505 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.505 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:28.505 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:28.505 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.505 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:28.505 11:49:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.505 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.505 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.505 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.505 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.505 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.505 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:28.505 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.505 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.505 [ 00:11:28.505 { 00:11:28.505 "name": "BaseBdev2", 00:11:28.505 "aliases": [ 00:11:28.505 "455ae10b-84dc-42b6-bd1b-161a61c23a79" 00:11:28.505 ], 00:11:28.505 "product_name": "Malloc disk", 00:11:28.505 "block_size": 512, 00:11:28.505 "num_blocks": 65536, 00:11:28.505 "uuid": "455ae10b-84dc-42b6-bd1b-161a61c23a79", 00:11:28.505 "assigned_rate_limits": { 00:11:28.505 "rw_ios_per_sec": 0, 00:11:28.505 "rw_mbytes_per_sec": 0, 00:11:28.505 "r_mbytes_per_sec": 0, 00:11:28.506 "w_mbytes_per_sec": 0 00:11:28.506 }, 00:11:28.506 "claimed": false, 00:11:28.506 "zoned": false, 00:11:28.506 "supported_io_types": { 00:11:28.506 "read": true, 00:11:28.506 "write": true, 00:11:28.506 "unmap": true, 00:11:28.506 "flush": true, 00:11:28.506 "reset": true, 00:11:28.506 "nvme_admin": false, 00:11:28.506 "nvme_io": false, 00:11:28.506 "nvme_io_md": false, 00:11:28.506 "write_zeroes": true, 00:11:28.506 "zcopy": true, 00:11:28.506 "get_zone_info": false, 00:11:28.506 "zone_management": false, 00:11:28.506 "zone_append": false, 
00:11:28.506 "compare": false, 00:11:28.506 "compare_and_write": false, 00:11:28.506 "abort": true, 00:11:28.506 "seek_hole": false, 00:11:28.506 "seek_data": false, 00:11:28.506 "copy": true, 00:11:28.506 "nvme_iov_md": false 00:11:28.506 }, 00:11:28.506 "memory_domains": [ 00:11:28.506 { 00:11:28.506 "dma_device_id": "system", 00:11:28.506 "dma_device_type": 1 00:11:28.506 }, 00:11:28.506 { 00:11:28.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.506 "dma_device_type": 2 00:11:28.506 } 00:11:28.506 ], 00:11:28.506 "driver_specific": {} 00:11:28.506 } 00:11:28.506 ] 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.506 BaseBdev3 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.506 [ 00:11:28.506 { 00:11:28.506 "name": "BaseBdev3", 00:11:28.506 "aliases": [ 00:11:28.506 "87658455-5d63-4323-ac71-38eaf19cb161" 00:11:28.506 ], 00:11:28.506 "product_name": "Malloc disk", 00:11:28.506 "block_size": 512, 00:11:28.506 "num_blocks": 65536, 00:11:28.506 "uuid": "87658455-5d63-4323-ac71-38eaf19cb161", 00:11:28.506 "assigned_rate_limits": { 00:11:28.506 "rw_ios_per_sec": 0, 00:11:28.506 "rw_mbytes_per_sec": 0, 00:11:28.506 "r_mbytes_per_sec": 0, 00:11:28.506 "w_mbytes_per_sec": 0 00:11:28.506 }, 00:11:28.506 "claimed": false, 00:11:28.506 "zoned": false, 00:11:28.506 "supported_io_types": { 00:11:28.506 "read": true, 00:11:28.506 "write": true, 00:11:28.506 "unmap": true, 00:11:28.506 "flush": true, 00:11:28.506 "reset": true, 00:11:28.506 "nvme_admin": false, 00:11:28.506 "nvme_io": false, 00:11:28.506 "nvme_io_md": false, 00:11:28.506 "write_zeroes": true, 00:11:28.506 "zcopy": true, 00:11:28.506 "get_zone_info": false, 00:11:28.506 "zone_management": false, 00:11:28.506 "zone_append": false, 
00:11:28.506 "compare": false, 00:11:28.506 "compare_and_write": false, 00:11:28.506 "abort": true, 00:11:28.506 "seek_hole": false, 00:11:28.506 "seek_data": false, 00:11:28.506 "copy": true, 00:11:28.506 "nvme_iov_md": false 00:11:28.506 }, 00:11:28.506 "memory_domains": [ 00:11:28.506 { 00:11:28.506 "dma_device_id": "system", 00:11:28.506 "dma_device_type": 1 00:11:28.506 }, 00:11:28.506 { 00:11:28.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.506 "dma_device_type": 2 00:11:28.506 } 00:11:28.506 ], 00:11:28.506 "driver_specific": {} 00:11:28.506 } 00:11:28.506 ] 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.506 BaseBdev4 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.506 [ 00:11:28.506 { 00:11:28.506 "name": "BaseBdev4", 00:11:28.506 "aliases": [ 00:11:28.506 "e70c01e4-edd6-4718-81c1-b0f78d78dec5" 00:11:28.506 ], 00:11:28.506 "product_name": "Malloc disk", 00:11:28.506 "block_size": 512, 00:11:28.506 "num_blocks": 65536, 00:11:28.506 "uuid": "e70c01e4-edd6-4718-81c1-b0f78d78dec5", 00:11:28.506 "assigned_rate_limits": { 00:11:28.506 "rw_ios_per_sec": 0, 00:11:28.506 "rw_mbytes_per_sec": 0, 00:11:28.506 "r_mbytes_per_sec": 0, 00:11:28.506 "w_mbytes_per_sec": 0 00:11:28.506 }, 00:11:28.506 "claimed": false, 00:11:28.506 "zoned": false, 00:11:28.506 "supported_io_types": { 00:11:28.506 "read": true, 00:11:28.506 "write": true, 00:11:28.506 "unmap": true, 00:11:28.506 "flush": true, 00:11:28.506 "reset": true, 00:11:28.506 "nvme_admin": false, 00:11:28.506 "nvme_io": false, 00:11:28.506 "nvme_io_md": false, 00:11:28.506 "write_zeroes": true, 00:11:28.506 "zcopy": true, 00:11:28.506 "get_zone_info": false, 00:11:28.506 "zone_management": false, 00:11:28.506 "zone_append": false, 
00:11:28.506 "compare": false, 00:11:28.506 "compare_and_write": false, 00:11:28.506 "abort": true, 00:11:28.506 "seek_hole": false, 00:11:28.506 "seek_data": false, 00:11:28.506 "copy": true, 00:11:28.506 "nvme_iov_md": false 00:11:28.506 }, 00:11:28.506 "memory_domains": [ 00:11:28.506 { 00:11:28.506 "dma_device_id": "system", 00:11:28.506 "dma_device_type": 1 00:11:28.506 }, 00:11:28.506 { 00:11:28.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.506 "dma_device_type": 2 00:11:28.506 } 00:11:28.506 ], 00:11:28.506 "driver_specific": {} 00:11:28.506 } 00:11:28.506 ] 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.506 [2024-11-27 11:49:54.856033] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:28.506 [2024-11-27 11:49:54.856166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:28.506 [2024-11-27 11:49:54.856222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:28.506 [2024-11-27 11:49:54.858197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:28.506 [2024-11-27 11:49:54.858302] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:28.506 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.507 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:28.507 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.507 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.507 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.507 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.507 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.507 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.507 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.507 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.507 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.507 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.507 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.507 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.507 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.767 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.767 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:28.767 "name": "Existed_Raid", 00:11:28.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.767 "strip_size_kb": 0, 00:11:28.767 "state": "configuring", 00:11:28.767 "raid_level": "raid1", 00:11:28.767 "superblock": false, 00:11:28.767 "num_base_bdevs": 4, 00:11:28.767 "num_base_bdevs_discovered": 3, 00:11:28.767 "num_base_bdevs_operational": 4, 00:11:28.767 "base_bdevs_list": [ 00:11:28.767 { 00:11:28.767 "name": "BaseBdev1", 00:11:28.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.767 "is_configured": false, 00:11:28.767 "data_offset": 0, 00:11:28.767 "data_size": 0 00:11:28.767 }, 00:11:28.767 { 00:11:28.767 "name": "BaseBdev2", 00:11:28.767 "uuid": "455ae10b-84dc-42b6-bd1b-161a61c23a79", 00:11:28.767 "is_configured": true, 00:11:28.767 "data_offset": 0, 00:11:28.767 "data_size": 65536 00:11:28.767 }, 00:11:28.767 { 00:11:28.767 "name": "BaseBdev3", 00:11:28.767 "uuid": "87658455-5d63-4323-ac71-38eaf19cb161", 00:11:28.767 "is_configured": true, 00:11:28.767 "data_offset": 0, 00:11:28.767 "data_size": 65536 00:11:28.767 }, 00:11:28.767 { 00:11:28.767 "name": "BaseBdev4", 00:11:28.767 "uuid": "e70c01e4-edd6-4718-81c1-b0f78d78dec5", 00:11:28.767 "is_configured": true, 00:11:28.767 "data_offset": 0, 00:11:28.767 "data_size": 65536 00:11:28.767 } 00:11:28.767 ] 00:11:28.767 }' 00:11:28.767 11:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.767 11:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.026 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:29.026 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.026 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.026 [2024-11-27 11:49:55.339256] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:11:29.026 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.026 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:29.026 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.026 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.026 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.026 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.026 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.026 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.026 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.026 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.026 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.026 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.026 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.026 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.026 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.026 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.026 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.026 "name": "Existed_Raid", 00:11:29.026 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:29.026 "strip_size_kb": 0, 00:11:29.026 "state": "configuring", 00:11:29.026 "raid_level": "raid1", 00:11:29.026 "superblock": false, 00:11:29.027 "num_base_bdevs": 4, 00:11:29.027 "num_base_bdevs_discovered": 2, 00:11:29.027 "num_base_bdevs_operational": 4, 00:11:29.027 "base_bdevs_list": [ 00:11:29.027 { 00:11:29.027 "name": "BaseBdev1", 00:11:29.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.027 "is_configured": false, 00:11:29.027 "data_offset": 0, 00:11:29.027 "data_size": 0 00:11:29.027 }, 00:11:29.027 { 00:11:29.027 "name": null, 00:11:29.027 "uuid": "455ae10b-84dc-42b6-bd1b-161a61c23a79", 00:11:29.027 "is_configured": false, 00:11:29.027 "data_offset": 0, 00:11:29.027 "data_size": 65536 00:11:29.027 }, 00:11:29.027 { 00:11:29.027 "name": "BaseBdev3", 00:11:29.027 "uuid": "87658455-5d63-4323-ac71-38eaf19cb161", 00:11:29.027 "is_configured": true, 00:11:29.027 "data_offset": 0, 00:11:29.027 "data_size": 65536 00:11:29.027 }, 00:11:29.027 { 00:11:29.027 "name": "BaseBdev4", 00:11:29.027 "uuid": "e70c01e4-edd6-4718-81c1-b0f78d78dec5", 00:11:29.027 "is_configured": true, 00:11:29.027 "data_offset": 0, 00:11:29.027 "data_size": 65536 00:11:29.027 } 00:11:29.027 ] 00:11:29.027 }' 00:11:29.027 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.027 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.596 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.596 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:29.596 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.596 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.596 11:49:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.596 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:29.596 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:29.596 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.596 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.596 [2024-11-27 11:49:55.912504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:29.596 BaseBdev1 00:11:29.596 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.596 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:29.596 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:29.596 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:29.596 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:29.596 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:29.596 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:29.596 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:29.596 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.596 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.596 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.596 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:11:29.596 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.596 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.596 [ 00:11:29.596 { 00:11:29.596 "name": "BaseBdev1", 00:11:29.596 "aliases": [ 00:11:29.596 "e815f103-75e6-4432-8b13-9a4c45d5873d" 00:11:29.596 ], 00:11:29.596 "product_name": "Malloc disk", 00:11:29.596 "block_size": 512, 00:11:29.597 "num_blocks": 65536, 00:11:29.597 "uuid": "e815f103-75e6-4432-8b13-9a4c45d5873d", 00:11:29.597 "assigned_rate_limits": { 00:11:29.597 "rw_ios_per_sec": 0, 00:11:29.597 "rw_mbytes_per_sec": 0, 00:11:29.597 "r_mbytes_per_sec": 0, 00:11:29.597 "w_mbytes_per_sec": 0 00:11:29.597 }, 00:11:29.597 "claimed": true, 00:11:29.597 "claim_type": "exclusive_write", 00:11:29.597 "zoned": false, 00:11:29.597 "supported_io_types": { 00:11:29.597 "read": true, 00:11:29.597 "write": true, 00:11:29.597 "unmap": true, 00:11:29.597 "flush": true, 00:11:29.597 "reset": true, 00:11:29.597 "nvme_admin": false, 00:11:29.597 "nvme_io": false, 00:11:29.597 "nvme_io_md": false, 00:11:29.597 "write_zeroes": true, 00:11:29.597 "zcopy": true, 00:11:29.597 "get_zone_info": false, 00:11:29.597 "zone_management": false, 00:11:29.597 "zone_append": false, 00:11:29.597 "compare": false, 00:11:29.597 "compare_and_write": false, 00:11:29.597 "abort": true, 00:11:29.597 "seek_hole": false, 00:11:29.597 "seek_data": false, 00:11:29.597 "copy": true, 00:11:29.597 "nvme_iov_md": false 00:11:29.597 }, 00:11:29.597 "memory_domains": [ 00:11:29.597 { 00:11:29.597 "dma_device_id": "system", 00:11:29.597 "dma_device_type": 1 00:11:29.597 }, 00:11:29.597 { 00:11:29.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.597 "dma_device_type": 2 00:11:29.597 } 00:11:29.597 ], 00:11:29.597 "driver_specific": {} 00:11:29.597 } 00:11:29.597 ] 00:11:29.597 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:29.597 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:29.597 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:29.597 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.597 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.597 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.597 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.597 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.597 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.597 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.597 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.597 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.597 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.597 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.597 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.597 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.597 11:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.857 11:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.857 "name": "Existed_Raid", 00:11:29.857 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:29.857 "strip_size_kb": 0, 00:11:29.857 "state": "configuring", 00:11:29.857 "raid_level": "raid1", 00:11:29.857 "superblock": false, 00:11:29.857 "num_base_bdevs": 4, 00:11:29.857 "num_base_bdevs_discovered": 3, 00:11:29.857 "num_base_bdevs_operational": 4, 00:11:29.857 "base_bdevs_list": [ 00:11:29.857 { 00:11:29.857 "name": "BaseBdev1", 00:11:29.857 "uuid": "e815f103-75e6-4432-8b13-9a4c45d5873d", 00:11:29.857 "is_configured": true, 00:11:29.857 "data_offset": 0, 00:11:29.857 "data_size": 65536 00:11:29.857 }, 00:11:29.857 { 00:11:29.857 "name": null, 00:11:29.857 "uuid": "455ae10b-84dc-42b6-bd1b-161a61c23a79", 00:11:29.857 "is_configured": false, 00:11:29.857 "data_offset": 0, 00:11:29.857 "data_size": 65536 00:11:29.857 }, 00:11:29.857 { 00:11:29.857 "name": "BaseBdev3", 00:11:29.857 "uuid": "87658455-5d63-4323-ac71-38eaf19cb161", 00:11:29.857 "is_configured": true, 00:11:29.857 "data_offset": 0, 00:11:29.857 "data_size": 65536 00:11:29.857 }, 00:11:29.857 { 00:11:29.857 "name": "BaseBdev4", 00:11:29.857 "uuid": "e70c01e4-edd6-4718-81c1-b0f78d78dec5", 00:11:29.857 "is_configured": true, 00:11:29.857 "data_offset": 0, 00:11:29.857 "data_size": 65536 00:11:29.857 } 00:11:29.857 ] 00:11:29.857 }' 00:11:29.857 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.857 11:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.117 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.117 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:30.117 11:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.117 11:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.117 11:49:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.117 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:30.117 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:30.117 11:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.117 11:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.117 [2024-11-27 11:49:56.467667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:30.117 11:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.117 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:30.117 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.117 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.117 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.117 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.117 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.117 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.117 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.117 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.117 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.117 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
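[editor's note] `verify_raid_bdev_state Existed_Raid configuring raid1 0 4` above pulls the raid JSON with `jq -r '.[] | select(.name == "Existed_Raid")'` and compares state, level, strip size, and bdev counts. A rough Python restatement of that check (the exact comparisons in bdev_raid.sh may differ; the sample dict mirrors the dump that follows, where BaseBdev3's slot is cleared and two base bdevs remain discovered):

```python
def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, num_operational):
    # Mirrors the shell helper: "discovered" is the number of base bdev slots
    # whose is_configured flag is true.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational
            and discovered == info["num_base_bdevs_discovered"])

raid_info = {
    "state": "configuring", "raid_level": "raid1", "strip_size_kb": 0,
    "num_base_bdevs": 4, "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 4,
    "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": True},
        {"name": None, "is_configured": False},
        {"name": None, "is_configured": False},
        {"name": "BaseBdev4", "is_configured": True},
    ],
}
print(verify_raid_bdev_state(raid_info, "configuring", "raid1", 0, 4))  # True
```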
00:11:30.117 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.117 11:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.117 11:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.117 11:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.376 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.376 "name": "Existed_Raid", 00:11:30.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.376 "strip_size_kb": 0, 00:11:30.376 "state": "configuring", 00:11:30.376 "raid_level": "raid1", 00:11:30.376 "superblock": false, 00:11:30.376 "num_base_bdevs": 4, 00:11:30.376 "num_base_bdevs_discovered": 2, 00:11:30.376 "num_base_bdevs_operational": 4, 00:11:30.376 "base_bdevs_list": [ 00:11:30.376 { 00:11:30.376 "name": "BaseBdev1", 00:11:30.377 "uuid": "e815f103-75e6-4432-8b13-9a4c45d5873d", 00:11:30.377 "is_configured": true, 00:11:30.377 "data_offset": 0, 00:11:30.377 "data_size": 65536 00:11:30.377 }, 00:11:30.377 { 00:11:30.377 "name": null, 00:11:30.377 "uuid": "455ae10b-84dc-42b6-bd1b-161a61c23a79", 00:11:30.377 "is_configured": false, 00:11:30.377 "data_offset": 0, 00:11:30.377 "data_size": 65536 00:11:30.377 }, 00:11:30.377 { 00:11:30.377 "name": null, 00:11:30.377 "uuid": "87658455-5d63-4323-ac71-38eaf19cb161", 00:11:30.377 "is_configured": false, 00:11:30.377 "data_offset": 0, 00:11:30.377 "data_size": 65536 00:11:30.377 }, 00:11:30.377 { 00:11:30.377 "name": "BaseBdev4", 00:11:30.377 "uuid": "e70c01e4-edd6-4718-81c1-b0f78d78dec5", 00:11:30.377 "is_configured": true, 00:11:30.377 "data_offset": 0, 00:11:30.377 "data_size": 65536 00:11:30.377 } 00:11:30.377 ] 00:11:30.377 }' 00:11:30.377 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.377 11:49:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.635 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.635 11:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.635 11:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.635 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:30.635 11:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.635 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:30.636 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:30.636 11:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.636 11:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.636 [2024-11-27 11:49:56.930891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:30.636 11:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.636 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:30.636 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.636 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.636 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.636 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.636 11:49:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.636 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.636 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.636 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.636 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.636 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.636 11:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.636 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.636 11:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.636 11:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.636 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.636 "name": "Existed_Raid", 00:11:30.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.636 "strip_size_kb": 0, 00:11:30.636 "state": "configuring", 00:11:30.636 "raid_level": "raid1", 00:11:30.636 "superblock": false, 00:11:30.636 "num_base_bdevs": 4, 00:11:30.636 "num_base_bdevs_discovered": 3, 00:11:30.636 "num_base_bdevs_operational": 4, 00:11:30.636 "base_bdevs_list": [ 00:11:30.636 { 00:11:30.636 "name": "BaseBdev1", 00:11:30.636 "uuid": "e815f103-75e6-4432-8b13-9a4c45d5873d", 00:11:30.636 "is_configured": true, 00:11:30.636 "data_offset": 0, 00:11:30.636 "data_size": 65536 00:11:30.636 }, 00:11:30.636 { 00:11:30.636 "name": null, 00:11:30.636 "uuid": "455ae10b-84dc-42b6-bd1b-161a61c23a79", 00:11:30.636 "is_configured": false, 00:11:30.636 "data_offset": 
0, 00:11:30.636 "data_size": 65536 00:11:30.636 }, 00:11:30.636 { 00:11:30.636 "name": "BaseBdev3", 00:11:30.636 "uuid": "87658455-5d63-4323-ac71-38eaf19cb161", 00:11:30.636 "is_configured": true, 00:11:30.636 "data_offset": 0, 00:11:30.636 "data_size": 65536 00:11:30.636 }, 00:11:30.636 { 00:11:30.636 "name": "BaseBdev4", 00:11:30.636 "uuid": "e70c01e4-edd6-4718-81c1-b0f78d78dec5", 00:11:30.636 "is_configured": true, 00:11:30.636 "data_offset": 0, 00:11:30.636 "data_size": 65536 00:11:30.636 } 00:11:30.636 ] 00:11:30.636 }' 00:11:30.636 11:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.636 11:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.203 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.203 11:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.203 11:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.203 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:31.203 11:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.203 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:31.203 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:31.203 11:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.203 11:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.203 [2024-11-27 11:49:57.402118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:31.203 11:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.203 11:49:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:31.203 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.203 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.203 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.203 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.203 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.203 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.203 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.203 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.203 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.203 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.203 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.203 11:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.203 11:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.203 11:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.203 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.203 "name": "Existed_Raid", 00:11:31.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.203 "strip_size_kb": 0, 00:11:31.203 "state": "configuring", 00:11:31.203 
"raid_level": "raid1", 00:11:31.203 "superblock": false, 00:11:31.203 "num_base_bdevs": 4, 00:11:31.203 "num_base_bdevs_discovered": 2, 00:11:31.203 "num_base_bdevs_operational": 4, 00:11:31.203 "base_bdevs_list": [ 00:11:31.203 { 00:11:31.203 "name": null, 00:11:31.203 "uuid": "e815f103-75e6-4432-8b13-9a4c45d5873d", 00:11:31.203 "is_configured": false, 00:11:31.203 "data_offset": 0, 00:11:31.203 "data_size": 65536 00:11:31.203 }, 00:11:31.203 { 00:11:31.203 "name": null, 00:11:31.203 "uuid": "455ae10b-84dc-42b6-bd1b-161a61c23a79", 00:11:31.203 "is_configured": false, 00:11:31.203 "data_offset": 0, 00:11:31.203 "data_size": 65536 00:11:31.203 }, 00:11:31.203 { 00:11:31.203 "name": "BaseBdev3", 00:11:31.203 "uuid": "87658455-5d63-4323-ac71-38eaf19cb161", 00:11:31.203 "is_configured": true, 00:11:31.203 "data_offset": 0, 00:11:31.203 "data_size": 65536 00:11:31.203 }, 00:11:31.203 { 00:11:31.203 "name": "BaseBdev4", 00:11:31.203 "uuid": "e70c01e4-edd6-4718-81c1-b0f78d78dec5", 00:11:31.203 "is_configured": true, 00:11:31.203 "data_offset": 0, 00:11:31.203 "data_size": 65536 00:11:31.203 } 00:11:31.203 ] 00:11:31.203 }' 00:11:31.203 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.203 11:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.772 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:31.772 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.772 11:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.772 11:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.772 11:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.772 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:11:31.772 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:31.772 11:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.772 11:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.772 [2024-11-27 11:49:57.994617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:31.772 11:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.772 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:31.772 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.772 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.772 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.772 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.772 11:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.772 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.772 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.772 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.772 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.772 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.772 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
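[editor's note] The remove/add cycle in the log (delete BaseBdev1, then `bdev_raid_add_base_bdev Existed_Raid BaseBdev2`) shows the slot behavior: removal keeps the slot but clears it (name becomes null, is_configured false), and re-adding reclaims it. A toy Python model of those transitions (SPDK tracks this in C inside bdev_raid.c; these helpers are illustrative only):

```python
def _recount(info):
    # Discovered count follows the configured slots, as in the JSON dumps above.
    info["num_base_bdevs_discovered"] = sum(
        1 for b in info["base_bdevs_list"] if b["is_configured"])

def remove_base_bdev(info, name):
    # Removing a base bdev from a configuring raid clears its slot in place.
    for b in info["base_bdevs_list"]:
        if b["name"] == name:
            b["name"], b["is_configured"] = None, False
    _recount(info)

def add_base_bdev(info, slot, name):
    # Re-adding claims the bdev and marks the same slot configured again.
    info["base_bdevs_list"][slot].update(name=name, is_configured=True)
    _recount(info)

raid = {
    "num_base_bdevs_discovered": 4,
    "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": True},
        {"name": "BaseBdev2", "is_configured": True},
        {"name": "BaseBdev3", "is_configured": True},
        {"name": "BaseBdev4", "is_configured": True},
    ],
}
remove_base_bdev(raid, "BaseBdev2")
print(raid["num_base_bdevs_discovered"])  # 3
add_base_bdev(raid, 1, "BaseBdev2")
print(raid["num_base_bdevs_discovered"])  # 4
```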
00:11:31.772 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.772 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.772 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.772 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.772 "name": "Existed_Raid", 00:11:31.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.772 "strip_size_kb": 0, 00:11:31.772 "state": "configuring", 00:11:31.772 "raid_level": "raid1", 00:11:31.772 "superblock": false, 00:11:31.772 "num_base_bdevs": 4, 00:11:31.772 "num_base_bdevs_discovered": 3, 00:11:31.772 "num_base_bdevs_operational": 4, 00:11:31.772 "base_bdevs_list": [ 00:11:31.772 { 00:11:31.772 "name": null, 00:11:31.772 "uuid": "e815f103-75e6-4432-8b13-9a4c45d5873d", 00:11:31.772 "is_configured": false, 00:11:31.773 "data_offset": 0, 00:11:31.773 "data_size": 65536 00:11:31.773 }, 00:11:31.773 { 00:11:31.773 "name": "BaseBdev2", 00:11:31.773 "uuid": "455ae10b-84dc-42b6-bd1b-161a61c23a79", 00:11:31.773 "is_configured": true, 00:11:31.773 "data_offset": 0, 00:11:31.773 "data_size": 65536 00:11:31.773 }, 00:11:31.773 { 00:11:31.773 "name": "BaseBdev3", 00:11:31.773 "uuid": "87658455-5d63-4323-ac71-38eaf19cb161", 00:11:31.773 "is_configured": true, 00:11:31.773 "data_offset": 0, 00:11:31.773 "data_size": 65536 00:11:31.773 }, 00:11:31.773 { 00:11:31.773 "name": "BaseBdev4", 00:11:31.773 "uuid": "e70c01e4-edd6-4718-81c1-b0f78d78dec5", 00:11:31.773 "is_configured": true, 00:11:31.773 "data_offset": 0, 00:11:31.773 "data_size": 65536 00:11:31.773 } 00:11:31.773 ] 00:11:31.773 }' 00:11:31.773 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.773 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.343 11:49:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e815f103-75e6-4432-8b13-9a4c45d5873d 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.343 [2024-11-27 11:49:58.627931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:32.343 [2024-11-27 11:49:58.627981] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:32.343 [2024-11-27 11:49:58.627990] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:32.343 
[2024-11-27 11:49:58.628271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:32.343 [2024-11-27 11:49:58.628435] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:32.343 [2024-11-27 11:49:58.628445] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:32.343 [2024-11-27 11:49:58.628689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.343 NewBaseBdev 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.343 [ 00:11:32.343 { 00:11:32.343 "name": "NewBaseBdev", 00:11:32.343 "aliases": [ 00:11:32.343 "e815f103-75e6-4432-8b13-9a4c45d5873d" 00:11:32.343 ], 00:11:32.343 "product_name": "Malloc disk", 00:11:32.343 "block_size": 512, 00:11:32.343 "num_blocks": 65536, 00:11:32.343 "uuid": "e815f103-75e6-4432-8b13-9a4c45d5873d", 00:11:32.343 "assigned_rate_limits": { 00:11:32.343 "rw_ios_per_sec": 0, 00:11:32.343 "rw_mbytes_per_sec": 0, 00:11:32.343 "r_mbytes_per_sec": 0, 00:11:32.343 "w_mbytes_per_sec": 0 00:11:32.343 }, 00:11:32.343 "claimed": true, 00:11:32.343 "claim_type": "exclusive_write", 00:11:32.343 "zoned": false, 00:11:32.343 "supported_io_types": { 00:11:32.343 "read": true, 00:11:32.343 "write": true, 00:11:32.343 "unmap": true, 00:11:32.343 "flush": true, 00:11:32.343 "reset": true, 00:11:32.343 "nvme_admin": false, 00:11:32.343 "nvme_io": false, 00:11:32.343 "nvme_io_md": false, 00:11:32.343 "write_zeroes": true, 00:11:32.343 "zcopy": true, 00:11:32.343 "get_zone_info": false, 00:11:32.343 "zone_management": false, 00:11:32.343 "zone_append": false, 00:11:32.343 "compare": false, 00:11:32.343 "compare_and_write": false, 00:11:32.343 "abort": true, 00:11:32.343 "seek_hole": false, 00:11:32.343 "seek_data": false, 00:11:32.343 "copy": true, 00:11:32.343 "nvme_iov_md": false 00:11:32.343 }, 00:11:32.343 "memory_domains": [ 00:11:32.343 { 00:11:32.343 "dma_device_id": "system", 00:11:32.343 "dma_device_type": 1 00:11:32.343 }, 00:11:32.343 { 00:11:32.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.343 "dma_device_type": 2 00:11:32.343 } 00:11:32.343 ], 00:11:32.343 "driver_specific": {} 00:11:32.343 } 00:11:32.343 ] 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
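[editor's note] After `bdev_malloc_create 32 512 -b NewBaseBdev -u e815f103-...` recreates the bdev under its old UUID, the test calls `waitforbdev NewBaseBdev`, which retries `bdev_get_bdevs -b NewBaseBdev -t 2000` until the bdev appears. A rough Python analogue of that polling loop (the real helper lives in autotest_common.sh and delegates the timeout to the RPC's `-t` flag; the lookup here is a stand-in):

```python
import time

def waitforbdev(get_bdev, name, timeout_s=2.0, poll_s=0.1):
    # Poll until the named bdev shows up or the ~2 s budget (the log's -t 2000,
    # in milliseconds) is exhausted.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_bdev(name) is not None:
            return True
        time.sleep(poll_s)
    return False

# Fake lookup standing in for "rpc.py bdev_get_bdevs -b NewBaseBdev".
bdevs = {"NewBaseBdev": {"uuid": "e815f103-75e6-4432-8b13-9a4c45d5873d"}}
print(waitforbdev(bdevs.get, "NewBaseBdev"))  # True
```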
00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.343 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.343 "name": "Existed_Raid", 00:11:32.343 "uuid": "b0dfa98b-2ab2-4190-aedf-7f665e170fe8", 00:11:32.343 "strip_size_kb": 0, 00:11:32.343 "state": "online", 00:11:32.343 
"raid_level": "raid1", 00:11:32.343 "superblock": false, 00:11:32.343 "num_base_bdevs": 4, 00:11:32.343 "num_base_bdevs_discovered": 4, 00:11:32.343 "num_base_bdevs_operational": 4, 00:11:32.343 "base_bdevs_list": [ 00:11:32.343 { 00:11:32.343 "name": "NewBaseBdev", 00:11:32.344 "uuid": "e815f103-75e6-4432-8b13-9a4c45d5873d", 00:11:32.344 "is_configured": true, 00:11:32.344 "data_offset": 0, 00:11:32.344 "data_size": 65536 00:11:32.344 }, 00:11:32.344 { 00:11:32.344 "name": "BaseBdev2", 00:11:32.344 "uuid": "455ae10b-84dc-42b6-bd1b-161a61c23a79", 00:11:32.344 "is_configured": true, 00:11:32.344 "data_offset": 0, 00:11:32.344 "data_size": 65536 00:11:32.344 }, 00:11:32.344 { 00:11:32.344 "name": "BaseBdev3", 00:11:32.344 "uuid": "87658455-5d63-4323-ac71-38eaf19cb161", 00:11:32.344 "is_configured": true, 00:11:32.344 "data_offset": 0, 00:11:32.344 "data_size": 65536 00:11:32.344 }, 00:11:32.344 { 00:11:32.344 "name": "BaseBdev4", 00:11:32.344 "uuid": "e70c01e4-edd6-4718-81c1-b0f78d78dec5", 00:11:32.344 "is_configured": true, 00:11:32.344 "data_offset": 0, 00:11:32.344 "data_size": 65536 00:11:32.344 } 00:11:32.344 ] 00:11:32.344 }' 00:11:32.344 11:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.344 11:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.911 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:32.911 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:32.911 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:32.911 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:32.911 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:32.911 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:11:32.911 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:32.911 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.911 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.911 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:32.911 [2024-11-27 11:49:59.151639] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:32.911 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.911 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:32.911 "name": "Existed_Raid", 00:11:32.911 "aliases": [ 00:11:32.911 "b0dfa98b-2ab2-4190-aedf-7f665e170fe8" 00:11:32.911 ], 00:11:32.911 "product_name": "Raid Volume", 00:11:32.911 "block_size": 512, 00:11:32.911 "num_blocks": 65536, 00:11:32.912 "uuid": "b0dfa98b-2ab2-4190-aedf-7f665e170fe8", 00:11:32.912 "assigned_rate_limits": { 00:11:32.912 "rw_ios_per_sec": 0, 00:11:32.912 "rw_mbytes_per_sec": 0, 00:11:32.912 "r_mbytes_per_sec": 0, 00:11:32.912 "w_mbytes_per_sec": 0 00:11:32.912 }, 00:11:32.912 "claimed": false, 00:11:32.912 "zoned": false, 00:11:32.912 "supported_io_types": { 00:11:32.912 "read": true, 00:11:32.912 "write": true, 00:11:32.912 "unmap": false, 00:11:32.912 "flush": false, 00:11:32.912 "reset": true, 00:11:32.912 "nvme_admin": false, 00:11:32.912 "nvme_io": false, 00:11:32.912 "nvme_io_md": false, 00:11:32.912 "write_zeroes": true, 00:11:32.912 "zcopy": false, 00:11:32.912 "get_zone_info": false, 00:11:32.912 "zone_management": false, 00:11:32.912 "zone_append": false, 00:11:32.912 "compare": false, 00:11:32.912 "compare_and_write": false, 00:11:32.912 "abort": false, 00:11:32.912 "seek_hole": false, 00:11:32.912 "seek_data": false, 00:11:32.912 
"copy": false, 00:11:32.912 "nvme_iov_md": false 00:11:32.912 }, 00:11:32.912 "memory_domains": [ 00:11:32.912 { 00:11:32.912 "dma_device_id": "system", 00:11:32.912 "dma_device_type": 1 00:11:32.912 }, 00:11:32.912 { 00:11:32.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.912 "dma_device_type": 2 00:11:32.912 }, 00:11:32.912 { 00:11:32.912 "dma_device_id": "system", 00:11:32.912 "dma_device_type": 1 00:11:32.912 }, 00:11:32.912 { 00:11:32.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.912 "dma_device_type": 2 00:11:32.912 }, 00:11:32.912 { 00:11:32.912 "dma_device_id": "system", 00:11:32.912 "dma_device_type": 1 00:11:32.912 }, 00:11:32.912 { 00:11:32.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.912 "dma_device_type": 2 00:11:32.912 }, 00:11:32.912 { 00:11:32.912 "dma_device_id": "system", 00:11:32.912 "dma_device_type": 1 00:11:32.912 }, 00:11:32.912 { 00:11:32.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.912 "dma_device_type": 2 00:11:32.912 } 00:11:32.912 ], 00:11:32.912 "driver_specific": { 00:11:32.912 "raid": { 00:11:32.912 "uuid": "b0dfa98b-2ab2-4190-aedf-7f665e170fe8", 00:11:32.912 "strip_size_kb": 0, 00:11:32.912 "state": "online", 00:11:32.912 "raid_level": "raid1", 00:11:32.912 "superblock": false, 00:11:32.912 "num_base_bdevs": 4, 00:11:32.912 "num_base_bdevs_discovered": 4, 00:11:32.912 "num_base_bdevs_operational": 4, 00:11:32.912 "base_bdevs_list": [ 00:11:32.912 { 00:11:32.912 "name": "NewBaseBdev", 00:11:32.912 "uuid": "e815f103-75e6-4432-8b13-9a4c45d5873d", 00:11:32.912 "is_configured": true, 00:11:32.912 "data_offset": 0, 00:11:32.912 "data_size": 65536 00:11:32.912 }, 00:11:32.912 { 00:11:32.912 "name": "BaseBdev2", 00:11:32.912 "uuid": "455ae10b-84dc-42b6-bd1b-161a61c23a79", 00:11:32.912 "is_configured": true, 00:11:32.912 "data_offset": 0, 00:11:32.912 "data_size": 65536 00:11:32.912 }, 00:11:32.912 { 00:11:32.912 "name": "BaseBdev3", 00:11:32.912 "uuid": "87658455-5d63-4323-ac71-38eaf19cb161", 00:11:32.912 
"is_configured": true, 00:11:32.912 "data_offset": 0, 00:11:32.912 "data_size": 65536 00:11:32.912 }, 00:11:32.912 { 00:11:32.912 "name": "BaseBdev4", 00:11:32.912 "uuid": "e70c01e4-edd6-4718-81c1-b0f78d78dec5", 00:11:32.912 "is_configured": true, 00:11:32.912 "data_offset": 0, 00:11:32.912 "data_size": 65536 00:11:32.912 } 00:11:32.912 ] 00:11:32.912 } 00:11:32.912 } 00:11:32.912 }' 00:11:32.912 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:32.912 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:32.912 BaseBdev2 00:11:32.912 BaseBdev3 00:11:32.912 BaseBdev4' 00:11:32.912 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.912 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:32.912 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.172 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:33.172 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.172 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.172 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.172 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.172 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.172 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.172 11:49:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.172 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:33.172 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.172 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.172 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.172 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.172 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.172 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.172 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.172 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:33.172 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.172 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.172 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.172 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.172 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.172 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.172 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.172 11:49:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:33.172 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.172 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.172 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.173 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.173 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.173 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.173 11:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:33.173 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.173 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.173 [2024-11-27 11:49:59.498593] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:33.173 [2024-11-27 11:49:59.498623] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:33.173 [2024-11-27 11:49:59.498713] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.173 [2024-11-27 11:49:59.499030] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:33.173 [2024-11-27 11:49:59.499045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:33.173 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.173 11:49:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73176 00:11:33.173 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73176 ']' 00:11:33.173 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73176 00:11:33.173 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:33.173 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.173 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73176 00:11:33.173 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.173 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.173 killing process with pid 73176 00:11:33.173 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73176' 00:11:33.173 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73176 00:11:33.173 [2024-11-27 11:49:59.538523] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:33.173 11:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73176 00:11:33.742 [2024-11-27 11:49:59.939189] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:35.124 00:11:35.124 real 0m11.912s 00:11:35.124 user 0m19.042s 00:11:35.124 sys 0m2.079s 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.124 ************************************ 00:11:35.124 END TEST raid_state_function_test 00:11:35.124 ************************************ 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:35.124 11:50:01 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:35.124 11:50:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:35.124 11:50:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.124 11:50:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:35.124 ************************************ 00:11:35.124 START TEST raid_state_function_test_sb 00:11:35.124 ************************************ 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:35.124 
11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73853 00:11:35.124 Process raid pid: 73853 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73853' 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73853 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73853 ']' 00:11:35.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.124 11:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.124 [2024-11-27 11:50:01.251535] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:11:35.124 [2024-11-27 11:50:01.251670] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.124 [2024-11-27 11:50:01.427644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.384 [2024-11-27 11:50:01.543267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.384 [2024-11-27 11:50:01.747406] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.384 [2024-11-27 11:50:01.747475] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.953 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:35.953 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:35.953 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:35.953 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.953 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 [2024-11-27 11:50:02.088462] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:35.953 [2024-11-27 11:50:02.088595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:35.953 [2024-11-27 11:50:02.088628] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:35.953 [2024-11-27 11:50:02.088641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:35.953 [2024-11-27 11:50:02.088649] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:35.953 [2024-11-27 11:50:02.088660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:35.953 [2024-11-27 11:50:02.088667] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:35.953 [2024-11-27 11:50:02.088678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:35.953 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.953 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:35.953 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.953 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.953 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.953 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.953 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.953 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.953 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.953 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.953 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.953 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.953 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.953 11:50:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.953 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.953 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.953 "name": "Existed_Raid", 00:11:35.953 "uuid": "ce2f260e-51c2-490d-99a7-8525e7ba38dd", 00:11:35.953 "strip_size_kb": 0, 00:11:35.953 "state": "configuring", 00:11:35.953 "raid_level": "raid1", 00:11:35.953 "superblock": true, 00:11:35.953 "num_base_bdevs": 4, 00:11:35.953 "num_base_bdevs_discovered": 0, 00:11:35.953 "num_base_bdevs_operational": 4, 00:11:35.953 "base_bdevs_list": [ 00:11:35.953 { 00:11:35.953 "name": "BaseBdev1", 00:11:35.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.953 "is_configured": false, 00:11:35.953 "data_offset": 0, 00:11:35.953 "data_size": 0 00:11:35.953 }, 00:11:35.953 { 00:11:35.953 "name": "BaseBdev2", 00:11:35.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.953 "is_configured": false, 00:11:35.953 "data_offset": 0, 00:11:35.953 "data_size": 0 00:11:35.953 }, 00:11:35.953 { 00:11:35.953 "name": "BaseBdev3", 00:11:35.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.953 "is_configured": false, 00:11:35.953 "data_offset": 0, 00:11:35.953 "data_size": 0 00:11:35.953 }, 00:11:35.953 { 00:11:35.953 "name": "BaseBdev4", 00:11:35.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.953 "is_configured": false, 00:11:35.953 "data_offset": 0, 00:11:35.953 "data_size": 0 00:11:35.953 } 00:11:35.953 ] 00:11:35.953 }' 00:11:35.953 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.953 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.213 11:50:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:36.213 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.213 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.213 [2024-11-27 11:50:02.539624] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:36.213 [2024-11-27 11:50:02.539726] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:36.213 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.213 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:36.213 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.213 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.213 [2024-11-27 11:50:02.551611] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:36.213 [2024-11-27 11:50:02.551694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:36.214 [2024-11-27 11:50:02.551722] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:36.214 [2024-11-27 11:50:02.551746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:36.214 [2024-11-27 11:50:02.551765] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:36.214 [2024-11-27 11:50:02.551786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:36.214 [2024-11-27 11:50:02.551804] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:36.214 [2024-11-27 11:50:02.551825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:36.214 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.214 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:36.214 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.214 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.474 [2024-11-27 11:50:02.599946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.474 BaseBdev1 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.474 [ 00:11:36.474 { 00:11:36.474 "name": "BaseBdev1", 00:11:36.474 "aliases": [ 00:11:36.474 "945668e2-a13a-4f59-9926-60b189772783" 00:11:36.474 ], 00:11:36.474 "product_name": "Malloc disk", 00:11:36.474 "block_size": 512, 00:11:36.474 "num_blocks": 65536, 00:11:36.474 "uuid": "945668e2-a13a-4f59-9926-60b189772783", 00:11:36.474 "assigned_rate_limits": { 00:11:36.474 "rw_ios_per_sec": 0, 00:11:36.474 "rw_mbytes_per_sec": 0, 00:11:36.474 "r_mbytes_per_sec": 0, 00:11:36.474 "w_mbytes_per_sec": 0 00:11:36.474 }, 00:11:36.474 "claimed": true, 00:11:36.474 "claim_type": "exclusive_write", 00:11:36.474 "zoned": false, 00:11:36.474 "supported_io_types": { 00:11:36.474 "read": true, 00:11:36.474 "write": true, 00:11:36.474 "unmap": true, 00:11:36.474 "flush": true, 00:11:36.474 "reset": true, 00:11:36.474 "nvme_admin": false, 00:11:36.474 "nvme_io": false, 00:11:36.474 "nvme_io_md": false, 00:11:36.474 "write_zeroes": true, 00:11:36.474 "zcopy": true, 00:11:36.474 "get_zone_info": false, 00:11:36.474 "zone_management": false, 00:11:36.474 "zone_append": false, 00:11:36.474 "compare": false, 00:11:36.474 "compare_and_write": false, 00:11:36.474 "abort": true, 00:11:36.474 "seek_hole": false, 00:11:36.474 "seek_data": false, 00:11:36.474 "copy": true, 00:11:36.474 "nvme_iov_md": false 00:11:36.474 }, 00:11:36.474 "memory_domains": [ 00:11:36.474 { 00:11:36.474 "dma_device_id": "system", 00:11:36.474 "dma_device_type": 1 00:11:36.474 }, 00:11:36.474 { 00:11:36.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.474 "dma_device_type": 2 00:11:36.474 } 00:11:36.474 ], 00:11:36.474 "driver_specific": {} 
00:11:36.474 } 00:11:36.474 ] 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.474 "name": "Existed_Raid", 00:11:36.474 "uuid": "21928fa1-f2ca-49ee-af6d-efcd2a9070de", 00:11:36.474 "strip_size_kb": 0, 00:11:36.474 "state": "configuring", 00:11:36.474 "raid_level": "raid1", 00:11:36.474 "superblock": true, 00:11:36.474 "num_base_bdevs": 4, 00:11:36.474 "num_base_bdevs_discovered": 1, 00:11:36.474 "num_base_bdevs_operational": 4, 00:11:36.474 "base_bdevs_list": [ 00:11:36.474 { 00:11:36.474 "name": "BaseBdev1", 00:11:36.474 "uuid": "945668e2-a13a-4f59-9926-60b189772783", 00:11:36.474 "is_configured": true, 00:11:36.474 "data_offset": 2048, 00:11:36.474 "data_size": 63488 00:11:36.474 }, 00:11:36.474 { 00:11:36.474 "name": "BaseBdev2", 00:11:36.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.474 "is_configured": false, 00:11:36.474 "data_offset": 0, 00:11:36.474 "data_size": 0 00:11:36.474 }, 00:11:36.474 { 00:11:36.474 "name": "BaseBdev3", 00:11:36.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.474 "is_configured": false, 00:11:36.474 "data_offset": 0, 00:11:36.474 "data_size": 0 00:11:36.474 }, 00:11:36.474 { 00:11:36.474 "name": "BaseBdev4", 00:11:36.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.474 "is_configured": false, 00:11:36.474 "data_offset": 0, 00:11:36.474 "data_size": 0 00:11:36.474 } 00:11:36.474 ] 00:11:36.474 }' 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.474 11:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.734 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:36.734 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.734 11:50:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:36.734 [2024-11-27 11:50:03.079138] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:36.734 [2024-11-27 11:50:03.079261] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:36.734 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.734 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:36.734 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.734 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.734 [2024-11-27 11:50:03.091151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.734 [2024-11-27 11:50:03.093021] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:36.734 [2024-11-27 11:50:03.093094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:36.734 [2024-11-27 11:50:03.093139] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:36.734 [2024-11-27 11:50:03.093163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:36.734 [2024-11-27 11:50:03.093182] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:36.734 [2024-11-27 11:50:03.093204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:36.734 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.734 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:36.734 11:50:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:36.734 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:36.734 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.734 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.734 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.734 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.734 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.735 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.735 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.735 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.735 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.735 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.735 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.735 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.735 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.994 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.995 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.995 "name": 
"Existed_Raid", 00:11:36.995 "uuid": "35f59ddd-4f95-45e3-8545-2c32ff9065be", 00:11:36.995 "strip_size_kb": 0, 00:11:36.995 "state": "configuring", 00:11:36.995 "raid_level": "raid1", 00:11:36.995 "superblock": true, 00:11:36.995 "num_base_bdevs": 4, 00:11:36.995 "num_base_bdevs_discovered": 1, 00:11:36.995 "num_base_bdevs_operational": 4, 00:11:36.995 "base_bdevs_list": [ 00:11:36.995 { 00:11:36.995 "name": "BaseBdev1", 00:11:36.995 "uuid": "945668e2-a13a-4f59-9926-60b189772783", 00:11:36.995 "is_configured": true, 00:11:36.995 "data_offset": 2048, 00:11:36.995 "data_size": 63488 00:11:36.995 }, 00:11:36.995 { 00:11:36.995 "name": "BaseBdev2", 00:11:36.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.995 "is_configured": false, 00:11:36.995 "data_offset": 0, 00:11:36.995 "data_size": 0 00:11:36.995 }, 00:11:36.995 { 00:11:36.995 "name": "BaseBdev3", 00:11:36.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.995 "is_configured": false, 00:11:36.995 "data_offset": 0, 00:11:36.995 "data_size": 0 00:11:36.995 }, 00:11:36.995 { 00:11:36.995 "name": "BaseBdev4", 00:11:36.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.995 "is_configured": false, 00:11:36.995 "data_offset": 0, 00:11:36.995 "data_size": 0 00:11:36.995 } 00:11:36.995 ] 00:11:36.995 }' 00:11:36.995 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.995 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.255 [2024-11-27 11:50:03.560333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:37.255 
BaseBdev2 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.255 [ 00:11:37.255 { 00:11:37.255 "name": "BaseBdev2", 00:11:37.255 "aliases": [ 00:11:37.255 "ca25cba2-c76f-4850-8916-9040fecda9cc" 00:11:37.255 ], 00:11:37.255 "product_name": "Malloc disk", 00:11:37.255 "block_size": 512, 00:11:37.255 "num_blocks": 65536, 00:11:37.255 "uuid": "ca25cba2-c76f-4850-8916-9040fecda9cc", 00:11:37.255 "assigned_rate_limits": { 
00:11:37.255 "rw_ios_per_sec": 0, 00:11:37.255 "rw_mbytes_per_sec": 0, 00:11:37.255 "r_mbytes_per_sec": 0, 00:11:37.255 "w_mbytes_per_sec": 0 00:11:37.255 }, 00:11:37.255 "claimed": true, 00:11:37.255 "claim_type": "exclusive_write", 00:11:37.255 "zoned": false, 00:11:37.255 "supported_io_types": { 00:11:37.255 "read": true, 00:11:37.255 "write": true, 00:11:37.255 "unmap": true, 00:11:37.255 "flush": true, 00:11:37.255 "reset": true, 00:11:37.255 "nvme_admin": false, 00:11:37.255 "nvme_io": false, 00:11:37.255 "nvme_io_md": false, 00:11:37.255 "write_zeroes": true, 00:11:37.255 "zcopy": true, 00:11:37.255 "get_zone_info": false, 00:11:37.255 "zone_management": false, 00:11:37.255 "zone_append": false, 00:11:37.255 "compare": false, 00:11:37.255 "compare_and_write": false, 00:11:37.255 "abort": true, 00:11:37.255 "seek_hole": false, 00:11:37.255 "seek_data": false, 00:11:37.255 "copy": true, 00:11:37.255 "nvme_iov_md": false 00:11:37.255 }, 00:11:37.255 "memory_domains": [ 00:11:37.255 { 00:11:37.255 "dma_device_id": "system", 00:11:37.255 "dma_device_type": 1 00:11:37.255 }, 00:11:37.255 { 00:11:37.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.255 "dma_device_type": 2 00:11:37.255 } 00:11:37.255 ], 00:11:37.255 "driver_specific": {} 00:11:37.255 } 00:11:37.255 ] 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.255 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.515 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.515 "name": "Existed_Raid", 00:11:37.515 "uuid": "35f59ddd-4f95-45e3-8545-2c32ff9065be", 00:11:37.515 "strip_size_kb": 0, 00:11:37.515 "state": "configuring", 00:11:37.515 "raid_level": "raid1", 00:11:37.515 "superblock": true, 00:11:37.515 "num_base_bdevs": 4, 00:11:37.515 "num_base_bdevs_discovered": 2, 00:11:37.515 "num_base_bdevs_operational": 4, 00:11:37.515 
"base_bdevs_list": [ 00:11:37.515 { 00:11:37.515 "name": "BaseBdev1", 00:11:37.515 "uuid": "945668e2-a13a-4f59-9926-60b189772783", 00:11:37.515 "is_configured": true, 00:11:37.515 "data_offset": 2048, 00:11:37.515 "data_size": 63488 00:11:37.515 }, 00:11:37.515 { 00:11:37.515 "name": "BaseBdev2", 00:11:37.515 "uuid": "ca25cba2-c76f-4850-8916-9040fecda9cc", 00:11:37.515 "is_configured": true, 00:11:37.515 "data_offset": 2048, 00:11:37.515 "data_size": 63488 00:11:37.515 }, 00:11:37.515 { 00:11:37.515 "name": "BaseBdev3", 00:11:37.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.515 "is_configured": false, 00:11:37.515 "data_offset": 0, 00:11:37.515 "data_size": 0 00:11:37.515 }, 00:11:37.515 { 00:11:37.515 "name": "BaseBdev4", 00:11:37.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.515 "is_configured": false, 00:11:37.515 "data_offset": 0, 00:11:37.515 "data_size": 0 00:11:37.515 } 00:11:37.515 ] 00:11:37.515 }' 00:11:37.515 11:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.515 11:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.775 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:37.775 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.775 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.775 [2024-11-27 11:50:04.128526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:37.775 BaseBdev3 00:11:37.775 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.775 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:37.775 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:11:37.775 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.775 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:37.775 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.775 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.775 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.775 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.775 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.775 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.775 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:37.775 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.775 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.775 [ 00:11:37.775 { 00:11:37.775 "name": "BaseBdev3", 00:11:37.775 "aliases": [ 00:11:37.775 "f7964452-fd4c-42d3-b2c9-78a52f489196" 00:11:37.775 ], 00:11:37.775 "product_name": "Malloc disk", 00:11:37.775 "block_size": 512, 00:11:37.775 "num_blocks": 65536, 00:11:37.775 "uuid": "f7964452-fd4c-42d3-b2c9-78a52f489196", 00:11:37.775 "assigned_rate_limits": { 00:11:37.775 "rw_ios_per_sec": 0, 00:11:37.775 "rw_mbytes_per_sec": 0, 00:11:37.775 "r_mbytes_per_sec": 0, 00:11:37.775 "w_mbytes_per_sec": 0 00:11:38.035 }, 00:11:38.035 "claimed": true, 00:11:38.035 "claim_type": "exclusive_write", 00:11:38.035 "zoned": false, 00:11:38.035 "supported_io_types": { 00:11:38.035 "read": true, 00:11:38.035 
"write": true, 00:11:38.035 "unmap": true, 00:11:38.035 "flush": true, 00:11:38.035 "reset": true, 00:11:38.035 "nvme_admin": false, 00:11:38.035 "nvme_io": false, 00:11:38.035 "nvme_io_md": false, 00:11:38.035 "write_zeroes": true, 00:11:38.035 "zcopy": true, 00:11:38.035 "get_zone_info": false, 00:11:38.035 "zone_management": false, 00:11:38.035 "zone_append": false, 00:11:38.035 "compare": false, 00:11:38.035 "compare_and_write": false, 00:11:38.035 "abort": true, 00:11:38.035 "seek_hole": false, 00:11:38.035 "seek_data": false, 00:11:38.035 "copy": true, 00:11:38.035 "nvme_iov_md": false 00:11:38.035 }, 00:11:38.035 "memory_domains": [ 00:11:38.035 { 00:11:38.035 "dma_device_id": "system", 00:11:38.035 "dma_device_type": 1 00:11:38.035 }, 00:11:38.035 { 00:11:38.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.035 "dma_device_type": 2 00:11:38.035 } 00:11:38.035 ], 00:11:38.035 "driver_specific": {} 00:11:38.035 } 00:11:38.035 ] 00:11:38.035 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.035 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:38.035 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:38.035 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:38.035 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:38.035 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.035 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.035 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.035 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:38.035 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.035 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.035 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.035 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.035 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.035 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.035 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.035 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.035 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.035 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.035 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.035 "name": "Existed_Raid", 00:11:38.035 "uuid": "35f59ddd-4f95-45e3-8545-2c32ff9065be", 00:11:38.035 "strip_size_kb": 0, 00:11:38.035 "state": "configuring", 00:11:38.035 "raid_level": "raid1", 00:11:38.035 "superblock": true, 00:11:38.035 "num_base_bdevs": 4, 00:11:38.035 "num_base_bdevs_discovered": 3, 00:11:38.035 "num_base_bdevs_operational": 4, 00:11:38.035 "base_bdevs_list": [ 00:11:38.035 { 00:11:38.035 "name": "BaseBdev1", 00:11:38.035 "uuid": "945668e2-a13a-4f59-9926-60b189772783", 00:11:38.035 "is_configured": true, 00:11:38.035 "data_offset": 2048, 00:11:38.035 "data_size": 63488 00:11:38.035 }, 00:11:38.035 { 00:11:38.035 "name": "BaseBdev2", 00:11:38.035 "uuid": 
"ca25cba2-c76f-4850-8916-9040fecda9cc", 00:11:38.035 "is_configured": true, 00:11:38.035 "data_offset": 2048, 00:11:38.035 "data_size": 63488 00:11:38.035 }, 00:11:38.035 { 00:11:38.035 "name": "BaseBdev3", 00:11:38.035 "uuid": "f7964452-fd4c-42d3-b2c9-78a52f489196", 00:11:38.035 "is_configured": true, 00:11:38.035 "data_offset": 2048, 00:11:38.035 "data_size": 63488 00:11:38.035 }, 00:11:38.035 { 00:11:38.035 "name": "BaseBdev4", 00:11:38.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.035 "is_configured": false, 00:11:38.035 "data_offset": 0, 00:11:38.035 "data_size": 0 00:11:38.035 } 00:11:38.035 ] 00:11:38.035 }' 00:11:38.035 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.035 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.295 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:38.295 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.295 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.295 [2024-11-27 11:50:04.662183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:38.295 [2024-11-27 11:50:04.662556] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:38.295 [2024-11-27 11:50:04.662610] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:38.295 [2024-11-27 11:50:04.662920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:38.295 [2024-11-27 11:50:04.663134] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:38.295 [2024-11-27 11:50:04.663182] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:11:38.295 BaseBdev4 00:11:38.295 [2024-11-27 11:50:04.663374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.295 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.295 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:38.295 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:38.295 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:38.295 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:38.295 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:38.295 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:38.295 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:38.295 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.295 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.555 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.556 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:38.556 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.556 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.556 [ 00:11:38.556 { 00:11:38.556 "name": "BaseBdev4", 00:11:38.556 "aliases": [ 00:11:38.556 "8871fdd5-d6a7-46ba-b2f0-ffc9e1d9b3ee" 00:11:38.556 ], 00:11:38.556 "product_name": "Malloc disk", 00:11:38.556 "block_size": 512, 00:11:38.556 
"num_blocks": 65536, 00:11:38.556 "uuid": "8871fdd5-d6a7-46ba-b2f0-ffc9e1d9b3ee", 00:11:38.556 "assigned_rate_limits": { 00:11:38.556 "rw_ios_per_sec": 0, 00:11:38.556 "rw_mbytes_per_sec": 0, 00:11:38.556 "r_mbytes_per_sec": 0, 00:11:38.556 "w_mbytes_per_sec": 0 00:11:38.556 }, 00:11:38.556 "claimed": true, 00:11:38.556 "claim_type": "exclusive_write", 00:11:38.556 "zoned": false, 00:11:38.556 "supported_io_types": { 00:11:38.556 "read": true, 00:11:38.556 "write": true, 00:11:38.556 "unmap": true, 00:11:38.556 "flush": true, 00:11:38.556 "reset": true, 00:11:38.556 "nvme_admin": false, 00:11:38.556 "nvme_io": false, 00:11:38.556 "nvme_io_md": false, 00:11:38.556 "write_zeroes": true, 00:11:38.556 "zcopy": true, 00:11:38.556 "get_zone_info": false, 00:11:38.556 "zone_management": false, 00:11:38.556 "zone_append": false, 00:11:38.556 "compare": false, 00:11:38.556 "compare_and_write": false, 00:11:38.556 "abort": true, 00:11:38.556 "seek_hole": false, 00:11:38.556 "seek_data": false, 00:11:38.556 "copy": true, 00:11:38.556 "nvme_iov_md": false 00:11:38.556 }, 00:11:38.556 "memory_domains": [ 00:11:38.556 { 00:11:38.556 "dma_device_id": "system", 00:11:38.556 "dma_device_type": 1 00:11:38.556 }, 00:11:38.556 { 00:11:38.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.556 "dma_device_type": 2 00:11:38.556 } 00:11:38.556 ], 00:11:38.556 "driver_specific": {} 00:11:38.556 } 00:11:38.556 ] 00:11:38.556 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.556 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:38.556 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:38.556 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:38.556 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:38.556 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.556 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.556 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.556 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.556 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.556 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.556 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.556 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.556 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.556 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.556 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.556 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.556 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.556 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.556 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.556 "name": "Existed_Raid", 00:11:38.556 "uuid": "35f59ddd-4f95-45e3-8545-2c32ff9065be", 00:11:38.556 "strip_size_kb": 0, 00:11:38.556 "state": "online", 00:11:38.556 "raid_level": "raid1", 00:11:38.556 "superblock": true, 00:11:38.556 "num_base_bdevs": 4, 
00:11:38.556 "num_base_bdevs_discovered": 4, 00:11:38.556 "num_base_bdevs_operational": 4, 00:11:38.556 "base_bdevs_list": [ 00:11:38.556 { 00:11:38.556 "name": "BaseBdev1", 00:11:38.556 "uuid": "945668e2-a13a-4f59-9926-60b189772783", 00:11:38.556 "is_configured": true, 00:11:38.556 "data_offset": 2048, 00:11:38.556 "data_size": 63488 00:11:38.556 }, 00:11:38.556 { 00:11:38.556 "name": "BaseBdev2", 00:11:38.556 "uuid": "ca25cba2-c76f-4850-8916-9040fecda9cc", 00:11:38.556 "is_configured": true, 00:11:38.556 "data_offset": 2048, 00:11:38.556 "data_size": 63488 00:11:38.556 }, 00:11:38.556 { 00:11:38.556 "name": "BaseBdev3", 00:11:38.556 "uuid": "f7964452-fd4c-42d3-b2c9-78a52f489196", 00:11:38.556 "is_configured": true, 00:11:38.556 "data_offset": 2048, 00:11:38.556 "data_size": 63488 00:11:38.556 }, 00:11:38.556 { 00:11:38.556 "name": "BaseBdev4", 00:11:38.556 "uuid": "8871fdd5-d6a7-46ba-b2f0-ffc9e1d9b3ee", 00:11:38.556 "is_configured": true, 00:11:38.556 "data_offset": 2048, 00:11:38.556 "data_size": 63488 00:11:38.556 } 00:11:38.556 ] 00:11:38.556 }' 00:11:38.556 11:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.556 11:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.816 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:38.816 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:38.816 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:38.816 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:38.816 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:38.816 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:38.816 
11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:38.816 11:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.816 11:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.816 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:38.816 [2024-11-27 11:50:05.137845] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:38.816 11:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.816 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:38.816 "name": "Existed_Raid", 00:11:38.816 "aliases": [ 00:11:38.816 "35f59ddd-4f95-45e3-8545-2c32ff9065be" 00:11:38.816 ], 00:11:38.816 "product_name": "Raid Volume", 00:11:38.816 "block_size": 512, 00:11:38.816 "num_blocks": 63488, 00:11:38.816 "uuid": "35f59ddd-4f95-45e3-8545-2c32ff9065be", 00:11:38.816 "assigned_rate_limits": { 00:11:38.816 "rw_ios_per_sec": 0, 00:11:38.816 "rw_mbytes_per_sec": 0, 00:11:38.816 "r_mbytes_per_sec": 0, 00:11:38.816 "w_mbytes_per_sec": 0 00:11:38.816 }, 00:11:38.816 "claimed": false, 00:11:38.816 "zoned": false, 00:11:38.816 "supported_io_types": { 00:11:38.816 "read": true, 00:11:38.816 "write": true, 00:11:38.816 "unmap": false, 00:11:38.816 "flush": false, 00:11:38.816 "reset": true, 00:11:38.816 "nvme_admin": false, 00:11:38.816 "nvme_io": false, 00:11:38.816 "nvme_io_md": false, 00:11:38.816 "write_zeroes": true, 00:11:38.816 "zcopy": false, 00:11:38.816 "get_zone_info": false, 00:11:38.816 "zone_management": false, 00:11:38.816 "zone_append": false, 00:11:38.816 "compare": false, 00:11:38.816 "compare_and_write": false, 00:11:38.816 "abort": false, 00:11:38.816 "seek_hole": false, 00:11:38.816 "seek_data": false, 00:11:38.816 "copy": false, 00:11:38.816 
"nvme_iov_md": false 00:11:38.816 }, 00:11:38.816 "memory_domains": [ 00:11:38.816 { 00:11:38.816 "dma_device_id": "system", 00:11:38.816 "dma_device_type": 1 00:11:38.816 }, 00:11:38.816 { 00:11:38.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.816 "dma_device_type": 2 00:11:38.816 }, 00:11:38.816 { 00:11:38.816 "dma_device_id": "system", 00:11:38.816 "dma_device_type": 1 00:11:38.816 }, 00:11:38.816 { 00:11:38.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.816 "dma_device_type": 2 00:11:38.816 }, 00:11:38.816 { 00:11:38.816 "dma_device_id": "system", 00:11:38.816 "dma_device_type": 1 00:11:38.816 }, 00:11:38.816 { 00:11:38.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.816 "dma_device_type": 2 00:11:38.816 }, 00:11:38.816 { 00:11:38.816 "dma_device_id": "system", 00:11:38.816 "dma_device_type": 1 00:11:38.816 }, 00:11:38.816 { 00:11:38.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.816 "dma_device_type": 2 00:11:38.816 } 00:11:38.816 ], 00:11:38.816 "driver_specific": { 00:11:38.816 "raid": { 00:11:38.816 "uuid": "35f59ddd-4f95-45e3-8545-2c32ff9065be", 00:11:38.816 "strip_size_kb": 0, 00:11:38.816 "state": "online", 00:11:38.816 "raid_level": "raid1", 00:11:38.816 "superblock": true, 00:11:38.816 "num_base_bdevs": 4, 00:11:38.816 "num_base_bdevs_discovered": 4, 00:11:38.816 "num_base_bdevs_operational": 4, 00:11:38.816 "base_bdevs_list": [ 00:11:38.816 { 00:11:38.816 "name": "BaseBdev1", 00:11:38.816 "uuid": "945668e2-a13a-4f59-9926-60b189772783", 00:11:38.816 "is_configured": true, 00:11:38.816 "data_offset": 2048, 00:11:38.816 "data_size": 63488 00:11:38.816 }, 00:11:38.816 { 00:11:38.816 "name": "BaseBdev2", 00:11:38.816 "uuid": "ca25cba2-c76f-4850-8916-9040fecda9cc", 00:11:38.816 "is_configured": true, 00:11:38.816 "data_offset": 2048, 00:11:38.816 "data_size": 63488 00:11:38.816 }, 00:11:38.816 { 00:11:38.816 "name": "BaseBdev3", 00:11:38.816 "uuid": "f7964452-fd4c-42d3-b2c9-78a52f489196", 00:11:38.816 "is_configured": true, 
00:11:38.816 "data_offset": 2048, 00:11:38.816 "data_size": 63488 00:11:38.816 }, 00:11:38.816 { 00:11:38.816 "name": "BaseBdev4", 00:11:38.816 "uuid": "8871fdd5-d6a7-46ba-b2f0-ffc9e1d9b3ee", 00:11:38.816 "is_configured": true, 00:11:38.816 "data_offset": 2048, 00:11:38.816 "data_size": 63488 00:11:38.816 } 00:11:38.816 ] 00:11:38.816 } 00:11:38.816 } 00:11:38.816 }' 00:11:38.816 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:39.076 BaseBdev2 00:11:39.076 BaseBdev3 00:11:39.076 BaseBdev4' 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.076 11:50:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.076 11:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.336 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.336 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.336 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:39.336 11:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.336 11:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.336 [2024-11-27 11:50:05.500942] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:39.336 11:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.336 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:39.336 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:39.336 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:39.336 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:39.336 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:39.336 11:50:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:39.336 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.336 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.336 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.336 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.336 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.336 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.336 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.336 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.336 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.336 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.337 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.337 11:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.337 11:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.337 11:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.337 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.337 "name": "Existed_Raid", 00:11:39.337 "uuid": "35f59ddd-4f95-45e3-8545-2c32ff9065be", 00:11:39.337 "strip_size_kb": 0, 00:11:39.337 
"state": "online", 00:11:39.337 "raid_level": "raid1", 00:11:39.337 "superblock": true, 00:11:39.337 "num_base_bdevs": 4, 00:11:39.337 "num_base_bdevs_discovered": 3, 00:11:39.337 "num_base_bdevs_operational": 3, 00:11:39.337 "base_bdevs_list": [ 00:11:39.337 { 00:11:39.337 "name": null, 00:11:39.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.337 "is_configured": false, 00:11:39.337 "data_offset": 0, 00:11:39.337 "data_size": 63488 00:11:39.337 }, 00:11:39.337 { 00:11:39.337 "name": "BaseBdev2", 00:11:39.337 "uuid": "ca25cba2-c76f-4850-8916-9040fecda9cc", 00:11:39.337 "is_configured": true, 00:11:39.337 "data_offset": 2048, 00:11:39.337 "data_size": 63488 00:11:39.337 }, 00:11:39.337 { 00:11:39.337 "name": "BaseBdev3", 00:11:39.337 "uuid": "f7964452-fd4c-42d3-b2c9-78a52f489196", 00:11:39.337 "is_configured": true, 00:11:39.337 "data_offset": 2048, 00:11:39.337 "data_size": 63488 00:11:39.337 }, 00:11:39.337 { 00:11:39.337 "name": "BaseBdev4", 00:11:39.337 "uuid": "8871fdd5-d6a7-46ba-b2f0-ffc9e1d9b3ee", 00:11:39.337 "is_configured": true, 00:11:39.337 "data_offset": 2048, 00:11:39.337 "data_size": 63488 00:11:39.337 } 00:11:39.337 ] 00:11:39.337 }' 00:11:39.337 11:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.337 11:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.907 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:39.907 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:39.907 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.907 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.907 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.907 11:50:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:39.907 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.907 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:39.907 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:39.907 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:39.907 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.907 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.907 [2024-11-27 11:50:06.129131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:39.907 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.907 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:39.907 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:39.908 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.908 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:39.908 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.908 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.908 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.908 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:39.908 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:11:39.908 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:39.908 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.908 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.908 [2024-11-27 11:50:06.282622] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:40.168 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.168 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:40.168 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:40.168 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.168 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.168 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.168 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:40.168 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.168 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:40.168 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:40.168 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:40.168 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.168 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.168 [2024-11-27 11:50:06.432746] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:40.168 [2024-11-27 11:50:06.432866] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:40.168 [2024-11-27 11:50:06.530163] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:40.168 [2024-11-27 11:50:06.530230] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:40.168 [2024-11-27 11:50:06.530242] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:40.168 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.168 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:40.168 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:40.168 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.168 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:40.168 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.168 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.168 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.429 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:40.429 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:40.429 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:40.429 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:40.429 11:50:06 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:40.429 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:40.429 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.429 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.429 BaseBdev2 00:11:40.429 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.429 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:40.429 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:40.429 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:40.430 [ 00:11:40.430 { 00:11:40.430 "name": "BaseBdev2", 00:11:40.430 "aliases": [ 00:11:40.430 "d583212b-6486-42e5-be5a-91a58a8761af" 00:11:40.430 ], 00:11:40.430 "product_name": "Malloc disk", 00:11:40.430 "block_size": 512, 00:11:40.430 "num_blocks": 65536, 00:11:40.430 "uuid": "d583212b-6486-42e5-be5a-91a58a8761af", 00:11:40.430 "assigned_rate_limits": { 00:11:40.430 "rw_ios_per_sec": 0, 00:11:40.430 "rw_mbytes_per_sec": 0, 00:11:40.430 "r_mbytes_per_sec": 0, 00:11:40.430 "w_mbytes_per_sec": 0 00:11:40.430 }, 00:11:40.430 "claimed": false, 00:11:40.430 "zoned": false, 00:11:40.430 "supported_io_types": { 00:11:40.430 "read": true, 00:11:40.430 "write": true, 00:11:40.430 "unmap": true, 00:11:40.430 "flush": true, 00:11:40.430 "reset": true, 00:11:40.430 "nvme_admin": false, 00:11:40.430 "nvme_io": false, 00:11:40.430 "nvme_io_md": false, 00:11:40.430 "write_zeroes": true, 00:11:40.430 "zcopy": true, 00:11:40.430 "get_zone_info": false, 00:11:40.430 "zone_management": false, 00:11:40.430 "zone_append": false, 00:11:40.430 "compare": false, 00:11:40.430 "compare_and_write": false, 00:11:40.430 "abort": true, 00:11:40.430 "seek_hole": false, 00:11:40.430 "seek_data": false, 00:11:40.430 "copy": true, 00:11:40.430 "nvme_iov_md": false 00:11:40.430 }, 00:11:40.430 "memory_domains": [ 00:11:40.430 { 00:11:40.430 "dma_device_id": "system", 00:11:40.430 "dma_device_type": 1 00:11:40.430 }, 00:11:40.430 { 00:11:40.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.430 "dma_device_type": 2 00:11:40.430 } 00:11:40.430 ], 00:11:40.430 "driver_specific": {} 00:11:40.430 } 00:11:40.430 ] 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:40.430 11:50:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.430 BaseBdev3 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.430 11:50:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.430 [ 00:11:40.430 { 00:11:40.430 "name": "BaseBdev3", 00:11:40.430 "aliases": [ 00:11:40.430 "57b2589d-ce1d-4ea1-86bd-beb00b51a4ac" 00:11:40.430 ], 00:11:40.430 "product_name": "Malloc disk", 00:11:40.430 "block_size": 512, 00:11:40.430 "num_blocks": 65536, 00:11:40.430 "uuid": "57b2589d-ce1d-4ea1-86bd-beb00b51a4ac", 00:11:40.430 "assigned_rate_limits": { 00:11:40.430 "rw_ios_per_sec": 0, 00:11:40.430 "rw_mbytes_per_sec": 0, 00:11:40.430 "r_mbytes_per_sec": 0, 00:11:40.430 "w_mbytes_per_sec": 0 00:11:40.430 }, 00:11:40.430 "claimed": false, 00:11:40.430 "zoned": false, 00:11:40.430 "supported_io_types": { 00:11:40.430 "read": true, 00:11:40.430 "write": true, 00:11:40.430 "unmap": true, 00:11:40.430 "flush": true, 00:11:40.430 "reset": true, 00:11:40.430 "nvme_admin": false, 00:11:40.430 "nvme_io": false, 00:11:40.430 "nvme_io_md": false, 00:11:40.430 "write_zeroes": true, 00:11:40.430 "zcopy": true, 00:11:40.430 "get_zone_info": false, 00:11:40.430 "zone_management": false, 00:11:40.430 "zone_append": false, 00:11:40.430 "compare": false, 00:11:40.430 "compare_and_write": false, 00:11:40.430 "abort": true, 00:11:40.430 "seek_hole": false, 00:11:40.430 "seek_data": false, 00:11:40.430 "copy": true, 00:11:40.430 "nvme_iov_md": false 00:11:40.430 }, 00:11:40.430 "memory_domains": [ 00:11:40.430 { 00:11:40.430 "dma_device_id": "system", 00:11:40.430 "dma_device_type": 1 00:11:40.430 }, 00:11:40.430 { 00:11:40.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.430 "dma_device_type": 2 00:11:40.430 } 00:11:40.430 ], 00:11:40.430 "driver_specific": {} 00:11:40.430 } 00:11:40.430 ] 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.430 BaseBdev4 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.430 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.430 [ 00:11:40.430 { 00:11:40.430 "name": "BaseBdev4", 00:11:40.430 "aliases": [ 00:11:40.430 "4e866615-44fd-4ab5-a9e9-06c06311f8dd" 00:11:40.430 ], 00:11:40.430 "product_name": "Malloc disk", 00:11:40.430 "block_size": 512, 00:11:40.430 "num_blocks": 65536, 00:11:40.430 "uuid": "4e866615-44fd-4ab5-a9e9-06c06311f8dd", 00:11:40.430 "assigned_rate_limits": { 00:11:40.430 "rw_ios_per_sec": 0, 00:11:40.430 "rw_mbytes_per_sec": 0, 00:11:40.430 "r_mbytes_per_sec": 0, 00:11:40.430 "w_mbytes_per_sec": 0 00:11:40.430 }, 00:11:40.430 "claimed": false, 00:11:40.430 "zoned": false, 00:11:40.430 "supported_io_types": { 00:11:40.430 "read": true, 00:11:40.430 "write": true, 00:11:40.430 "unmap": true, 00:11:40.430 "flush": true, 00:11:40.430 "reset": true, 00:11:40.430 "nvme_admin": false, 00:11:40.430 "nvme_io": false, 00:11:40.430 "nvme_io_md": false, 00:11:40.430 "write_zeroes": true, 00:11:40.430 "zcopy": true, 00:11:40.430 "get_zone_info": false, 00:11:40.430 "zone_management": false, 00:11:40.430 "zone_append": false, 00:11:40.430 "compare": false, 00:11:40.430 "compare_and_write": false, 00:11:40.430 "abort": true, 00:11:40.430 "seek_hole": false, 00:11:40.430 "seek_data": false, 00:11:40.430 "copy": true, 00:11:40.430 "nvme_iov_md": false 00:11:40.430 }, 00:11:40.430 "memory_domains": [ 00:11:40.430 { 00:11:40.430 "dma_device_id": "system", 00:11:40.430 "dma_device_type": 1 00:11:40.430 }, 00:11:40.431 { 00:11:40.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.431 "dma_device_type": 2 00:11:40.431 } 00:11:40.431 ], 00:11:40.431 "driver_specific": {} 00:11:40.431 } 00:11:40.431 ] 00:11:40.431 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.431 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:40.431 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:40.431 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:40.431 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:40.431 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.431 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.690 [2024-11-27 11:50:06.816768] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:40.690 [2024-11-27 11:50:06.816848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:40.690 [2024-11-27 11:50:06.816877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:40.690 [2024-11-27 11:50:06.818873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:40.690 [2024-11-27 11:50:06.818929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:40.690 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.690 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:40.690 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.690 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.690 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.690 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:40.690 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.690 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.691 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.691 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.691 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.691 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.691 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.691 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.691 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.691 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.691 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.691 "name": "Existed_Raid", 00:11:40.691 "uuid": "63cc5df3-2eaa-4a3d-bb66-b0f4ab456585", 00:11:40.691 "strip_size_kb": 0, 00:11:40.691 "state": "configuring", 00:11:40.691 "raid_level": "raid1", 00:11:40.691 "superblock": true, 00:11:40.691 "num_base_bdevs": 4, 00:11:40.691 "num_base_bdevs_discovered": 3, 00:11:40.691 "num_base_bdevs_operational": 4, 00:11:40.691 "base_bdevs_list": [ 00:11:40.691 { 00:11:40.691 "name": "BaseBdev1", 00:11:40.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.691 "is_configured": false, 00:11:40.691 "data_offset": 0, 00:11:40.691 "data_size": 0 00:11:40.691 }, 00:11:40.691 { 00:11:40.691 "name": "BaseBdev2", 00:11:40.691 "uuid": "d583212b-6486-42e5-be5a-91a58a8761af", 
00:11:40.691 "is_configured": true, 00:11:40.691 "data_offset": 2048, 00:11:40.691 "data_size": 63488 00:11:40.691 }, 00:11:40.691 { 00:11:40.691 "name": "BaseBdev3", 00:11:40.691 "uuid": "57b2589d-ce1d-4ea1-86bd-beb00b51a4ac", 00:11:40.691 "is_configured": true, 00:11:40.691 "data_offset": 2048, 00:11:40.691 "data_size": 63488 00:11:40.691 }, 00:11:40.691 { 00:11:40.691 "name": "BaseBdev4", 00:11:40.691 "uuid": "4e866615-44fd-4ab5-a9e9-06c06311f8dd", 00:11:40.691 "is_configured": true, 00:11:40.691 "data_offset": 2048, 00:11:40.691 "data_size": 63488 00:11:40.691 } 00:11:40.691 ] 00:11:40.691 }' 00:11:40.691 11:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.691 11:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.950 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:40.950 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.950 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.950 [2024-11-27 11:50:07.307916] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:40.950 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.950 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:40.950 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.950 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.950 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.950 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:40.950 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.950 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.950 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.950 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.950 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.950 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.950 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.950 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.950 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.209 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.209 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.209 "name": "Existed_Raid", 00:11:41.209 "uuid": "63cc5df3-2eaa-4a3d-bb66-b0f4ab456585", 00:11:41.209 "strip_size_kb": 0, 00:11:41.209 "state": "configuring", 00:11:41.209 "raid_level": "raid1", 00:11:41.209 "superblock": true, 00:11:41.209 "num_base_bdevs": 4, 00:11:41.209 "num_base_bdevs_discovered": 2, 00:11:41.209 "num_base_bdevs_operational": 4, 00:11:41.209 "base_bdevs_list": [ 00:11:41.209 { 00:11:41.209 "name": "BaseBdev1", 00:11:41.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.209 "is_configured": false, 00:11:41.209 "data_offset": 0, 00:11:41.209 "data_size": 0 00:11:41.209 }, 00:11:41.209 { 00:11:41.209 "name": null, 00:11:41.209 "uuid": "d583212b-6486-42e5-be5a-91a58a8761af", 00:11:41.209 
"is_configured": false, 00:11:41.209 "data_offset": 0, 00:11:41.209 "data_size": 63488 00:11:41.209 }, 00:11:41.209 { 00:11:41.209 "name": "BaseBdev3", 00:11:41.209 "uuid": "57b2589d-ce1d-4ea1-86bd-beb00b51a4ac", 00:11:41.209 "is_configured": true, 00:11:41.209 "data_offset": 2048, 00:11:41.209 "data_size": 63488 00:11:41.209 }, 00:11:41.209 { 00:11:41.209 "name": "BaseBdev4", 00:11:41.209 "uuid": "4e866615-44fd-4ab5-a9e9-06c06311f8dd", 00:11:41.209 "is_configured": true, 00:11:41.209 "data_offset": 2048, 00:11:41.209 "data_size": 63488 00:11:41.209 } 00:11:41.209 ] 00:11:41.209 }' 00:11:41.209 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.209 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.468 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.468 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:41.468 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.468 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.468 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.468 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:41.468 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:41.468 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.468 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.727 [2024-11-27 11:50:07.883698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.727 BaseBdev1 
00:11:41.727 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.727 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:41.727 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:41.727 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:41.727 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:41.727 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:41.727 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:41.727 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:41.727 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.727 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.727 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.727 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:41.727 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.727 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.727 [ 00:11:41.727 { 00:11:41.727 "name": "BaseBdev1", 00:11:41.727 "aliases": [ 00:11:41.727 "b0b2a066-6af7-41a1-8ec8-f3d5aca0c5b7" 00:11:41.727 ], 00:11:41.727 "product_name": "Malloc disk", 00:11:41.727 "block_size": 512, 00:11:41.727 "num_blocks": 65536, 00:11:41.727 "uuid": "b0b2a066-6af7-41a1-8ec8-f3d5aca0c5b7", 00:11:41.727 "assigned_rate_limits": { 00:11:41.727 
"rw_ios_per_sec": 0, 00:11:41.727 "rw_mbytes_per_sec": 0, 00:11:41.727 "r_mbytes_per_sec": 0, 00:11:41.727 "w_mbytes_per_sec": 0 00:11:41.727 }, 00:11:41.727 "claimed": true, 00:11:41.727 "claim_type": "exclusive_write", 00:11:41.727 "zoned": false, 00:11:41.727 "supported_io_types": { 00:11:41.727 "read": true, 00:11:41.727 "write": true, 00:11:41.727 "unmap": true, 00:11:41.727 "flush": true, 00:11:41.727 "reset": true, 00:11:41.727 "nvme_admin": false, 00:11:41.728 "nvme_io": false, 00:11:41.728 "nvme_io_md": false, 00:11:41.728 "write_zeroes": true, 00:11:41.728 "zcopy": true, 00:11:41.728 "get_zone_info": false, 00:11:41.728 "zone_management": false, 00:11:41.728 "zone_append": false, 00:11:41.728 "compare": false, 00:11:41.728 "compare_and_write": false, 00:11:41.728 "abort": true, 00:11:41.728 "seek_hole": false, 00:11:41.728 "seek_data": false, 00:11:41.728 "copy": true, 00:11:41.728 "nvme_iov_md": false 00:11:41.728 }, 00:11:41.728 "memory_domains": [ 00:11:41.728 { 00:11:41.728 "dma_device_id": "system", 00:11:41.728 "dma_device_type": 1 00:11:41.728 }, 00:11:41.728 { 00:11:41.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.728 "dma_device_type": 2 00:11:41.728 } 00:11:41.728 ], 00:11:41.728 "driver_specific": {} 00:11:41.728 } 00:11:41.728 ] 00:11:41.728 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.728 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:41.728 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:41.728 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.728 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.728 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:41.728 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.728 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.728 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.728 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.728 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.728 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.728 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.728 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.728 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.728 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.728 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.728 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.728 "name": "Existed_Raid", 00:11:41.728 "uuid": "63cc5df3-2eaa-4a3d-bb66-b0f4ab456585", 00:11:41.728 "strip_size_kb": 0, 00:11:41.728 "state": "configuring", 00:11:41.728 "raid_level": "raid1", 00:11:41.728 "superblock": true, 00:11:41.728 "num_base_bdevs": 4, 00:11:41.728 "num_base_bdevs_discovered": 3, 00:11:41.728 "num_base_bdevs_operational": 4, 00:11:41.728 "base_bdevs_list": [ 00:11:41.728 { 00:11:41.728 "name": "BaseBdev1", 00:11:41.728 "uuid": "b0b2a066-6af7-41a1-8ec8-f3d5aca0c5b7", 00:11:41.728 "is_configured": true, 00:11:41.728 "data_offset": 2048, 00:11:41.728 "data_size": 63488 
00:11:41.728 }, 00:11:41.728 { 00:11:41.728 "name": null, 00:11:41.728 "uuid": "d583212b-6486-42e5-be5a-91a58a8761af", 00:11:41.728 "is_configured": false, 00:11:41.728 "data_offset": 0, 00:11:41.728 "data_size": 63488 00:11:41.728 }, 00:11:41.728 { 00:11:41.728 "name": "BaseBdev3", 00:11:41.728 "uuid": "57b2589d-ce1d-4ea1-86bd-beb00b51a4ac", 00:11:41.728 "is_configured": true, 00:11:41.728 "data_offset": 2048, 00:11:41.728 "data_size": 63488 00:11:41.728 }, 00:11:41.728 { 00:11:41.728 "name": "BaseBdev4", 00:11:41.728 "uuid": "4e866615-44fd-4ab5-a9e9-06c06311f8dd", 00:11:41.728 "is_configured": true, 00:11:41.728 "data_offset": 2048, 00:11:41.728 "data_size": 63488 00:11:41.728 } 00:11:41.728 ] 00:11:41.728 }' 00:11:41.728 11:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.728 11:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.988 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:41.988 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.988 11:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.988 11:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.248 11:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.248 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:42.248 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:42.248 11:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.248 11:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.248 
[2024-11-27 11:50:08.395296] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:42.248 11:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.248 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:42.248 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.248 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.248 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.248 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.248 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.248 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.248 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.248 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.248 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.248 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.248 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.248 11:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.248 11:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.248 11:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.248 11:50:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.248 "name": "Existed_Raid", 00:11:42.248 "uuid": "63cc5df3-2eaa-4a3d-bb66-b0f4ab456585", 00:11:42.248 "strip_size_kb": 0, 00:11:42.248 "state": "configuring", 00:11:42.248 "raid_level": "raid1", 00:11:42.248 "superblock": true, 00:11:42.248 "num_base_bdevs": 4, 00:11:42.248 "num_base_bdevs_discovered": 2, 00:11:42.248 "num_base_bdevs_operational": 4, 00:11:42.248 "base_bdevs_list": [ 00:11:42.248 { 00:11:42.248 "name": "BaseBdev1", 00:11:42.248 "uuid": "b0b2a066-6af7-41a1-8ec8-f3d5aca0c5b7", 00:11:42.248 "is_configured": true, 00:11:42.248 "data_offset": 2048, 00:11:42.248 "data_size": 63488 00:11:42.248 }, 00:11:42.248 { 00:11:42.248 "name": null, 00:11:42.248 "uuid": "d583212b-6486-42e5-be5a-91a58a8761af", 00:11:42.248 "is_configured": false, 00:11:42.248 "data_offset": 0, 00:11:42.248 "data_size": 63488 00:11:42.248 }, 00:11:42.248 { 00:11:42.248 "name": null, 00:11:42.248 "uuid": "57b2589d-ce1d-4ea1-86bd-beb00b51a4ac", 00:11:42.248 "is_configured": false, 00:11:42.248 "data_offset": 0, 00:11:42.248 "data_size": 63488 00:11:42.248 }, 00:11:42.248 { 00:11:42.248 "name": "BaseBdev4", 00:11:42.248 "uuid": "4e866615-44fd-4ab5-a9e9-06c06311f8dd", 00:11:42.248 "is_configured": true, 00:11:42.248 "data_offset": 2048, 00:11:42.248 "data_size": 63488 00:11:42.248 } 00:11:42.248 ] 00:11:42.248 }' 00:11:42.248 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.248 11:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.507 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.507 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:42.507 11:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.507 
11:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.767 11:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.767 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:42.767 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:42.767 11:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.767 11:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.767 [2024-11-27 11:50:08.930353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:42.767 11:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.767 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:42.767 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.767 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.767 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.767 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.767 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.767 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.767 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.767 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:42.767 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.767 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.767 11:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.767 11:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.767 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.767 11:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.767 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.767 "name": "Existed_Raid", 00:11:42.767 "uuid": "63cc5df3-2eaa-4a3d-bb66-b0f4ab456585", 00:11:42.767 "strip_size_kb": 0, 00:11:42.767 "state": "configuring", 00:11:42.767 "raid_level": "raid1", 00:11:42.767 "superblock": true, 00:11:42.767 "num_base_bdevs": 4, 00:11:42.767 "num_base_bdevs_discovered": 3, 00:11:42.767 "num_base_bdevs_operational": 4, 00:11:42.767 "base_bdevs_list": [ 00:11:42.767 { 00:11:42.767 "name": "BaseBdev1", 00:11:42.767 "uuid": "b0b2a066-6af7-41a1-8ec8-f3d5aca0c5b7", 00:11:42.767 "is_configured": true, 00:11:42.767 "data_offset": 2048, 00:11:42.767 "data_size": 63488 00:11:42.767 }, 00:11:42.767 { 00:11:42.767 "name": null, 00:11:42.767 "uuid": "d583212b-6486-42e5-be5a-91a58a8761af", 00:11:42.767 "is_configured": false, 00:11:42.767 "data_offset": 0, 00:11:42.767 "data_size": 63488 00:11:42.767 }, 00:11:42.767 { 00:11:42.767 "name": "BaseBdev3", 00:11:42.767 "uuid": "57b2589d-ce1d-4ea1-86bd-beb00b51a4ac", 00:11:42.767 "is_configured": true, 00:11:42.767 "data_offset": 2048, 00:11:42.767 "data_size": 63488 00:11:42.767 }, 00:11:42.767 { 00:11:42.767 "name": "BaseBdev4", 00:11:42.768 "uuid": 
"4e866615-44fd-4ab5-a9e9-06c06311f8dd", 00:11:42.768 "is_configured": true, 00:11:42.768 "data_offset": 2048, 00:11:42.768 "data_size": 63488 00:11:42.768 } 00:11:42.768 ] 00:11:42.768 }' 00:11:42.768 11:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.768 11:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.026 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.026 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:43.026 11:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.026 11:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.026 11:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.026 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:43.026 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:43.026 11:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.026 11:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.026 [2024-11-27 11:50:09.377629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:43.285 11:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.285 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:43.285 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.285 11:50:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.285 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.285 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.285 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.285 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.285 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.285 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.285 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.285 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.285 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.285 11:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.285 11:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.285 11:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.285 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.285 "name": "Existed_Raid", 00:11:43.285 "uuid": "63cc5df3-2eaa-4a3d-bb66-b0f4ab456585", 00:11:43.285 "strip_size_kb": 0, 00:11:43.285 "state": "configuring", 00:11:43.285 "raid_level": "raid1", 00:11:43.285 "superblock": true, 00:11:43.285 "num_base_bdevs": 4, 00:11:43.285 "num_base_bdevs_discovered": 2, 00:11:43.285 "num_base_bdevs_operational": 4, 00:11:43.285 "base_bdevs_list": [ 00:11:43.285 { 00:11:43.285 "name": null, 00:11:43.285 
"uuid": "b0b2a066-6af7-41a1-8ec8-f3d5aca0c5b7", 00:11:43.285 "is_configured": false, 00:11:43.285 "data_offset": 0, 00:11:43.285 "data_size": 63488 00:11:43.285 }, 00:11:43.285 { 00:11:43.285 "name": null, 00:11:43.285 "uuid": "d583212b-6486-42e5-be5a-91a58a8761af", 00:11:43.285 "is_configured": false, 00:11:43.285 "data_offset": 0, 00:11:43.285 "data_size": 63488 00:11:43.285 }, 00:11:43.285 { 00:11:43.285 "name": "BaseBdev3", 00:11:43.285 "uuid": "57b2589d-ce1d-4ea1-86bd-beb00b51a4ac", 00:11:43.285 "is_configured": true, 00:11:43.285 "data_offset": 2048, 00:11:43.285 "data_size": 63488 00:11:43.285 }, 00:11:43.285 { 00:11:43.285 "name": "BaseBdev4", 00:11:43.285 "uuid": "4e866615-44fd-4ab5-a9e9-06c06311f8dd", 00:11:43.285 "is_configured": true, 00:11:43.285 "data_offset": 2048, 00:11:43.285 "data_size": 63488 00:11:43.286 } 00:11:43.286 ] 00:11:43.286 }' 00:11:43.286 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.286 11:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.545 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.545 11:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.545 11:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.545 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:43.545 11:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.545 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:43.545 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:43.545 11:50:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.545 11:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.545 [2024-11-27 11:50:09.895707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:43.545 11:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.545 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:43.545 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.545 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.545 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.545 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.545 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.545 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.545 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.545 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.545 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.545 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.545 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.545 11:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.545 11:50:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.804 11:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.804 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.804 "name": "Existed_Raid", 00:11:43.804 "uuid": "63cc5df3-2eaa-4a3d-bb66-b0f4ab456585", 00:11:43.804 "strip_size_kb": 0, 00:11:43.804 "state": "configuring", 00:11:43.804 "raid_level": "raid1", 00:11:43.804 "superblock": true, 00:11:43.804 "num_base_bdevs": 4, 00:11:43.804 "num_base_bdevs_discovered": 3, 00:11:43.804 "num_base_bdevs_operational": 4, 00:11:43.804 "base_bdevs_list": [ 00:11:43.804 { 00:11:43.804 "name": null, 00:11:43.804 "uuid": "b0b2a066-6af7-41a1-8ec8-f3d5aca0c5b7", 00:11:43.804 "is_configured": false, 00:11:43.804 "data_offset": 0, 00:11:43.804 "data_size": 63488 00:11:43.804 }, 00:11:43.804 { 00:11:43.804 "name": "BaseBdev2", 00:11:43.804 "uuid": "d583212b-6486-42e5-be5a-91a58a8761af", 00:11:43.804 "is_configured": true, 00:11:43.804 "data_offset": 2048, 00:11:43.804 "data_size": 63488 00:11:43.804 }, 00:11:43.804 { 00:11:43.804 "name": "BaseBdev3", 00:11:43.804 "uuid": "57b2589d-ce1d-4ea1-86bd-beb00b51a4ac", 00:11:43.804 "is_configured": true, 00:11:43.804 "data_offset": 2048, 00:11:43.804 "data_size": 63488 00:11:43.804 }, 00:11:43.804 { 00:11:43.804 "name": "BaseBdev4", 00:11:43.804 "uuid": "4e866615-44fd-4ab5-a9e9-06c06311f8dd", 00:11:43.804 "is_configured": true, 00:11:43.804 "data_offset": 2048, 00:11:43.804 "data_size": 63488 00:11:43.804 } 00:11:43.804 ] 00:11:43.804 }' 00:11:43.804 11:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.804 11:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.064 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:44.064 11:50:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.064 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.064 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.064 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.064 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:44.064 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:44.064 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.064 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.064 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.064 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.064 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b0b2a066-6af7-41a1-8ec8-f3d5aca0c5b7 00:11:44.065 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.065 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.325 [2024-11-27 11:50:10.461346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:44.325 [2024-11-27 11:50:10.461594] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:44.325 [2024-11-27 11:50:10.461613] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:44.325 [2024-11-27 11:50:10.461912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:44.325 [2024-11-27 11:50:10.462102] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:44.325 [2024-11-27 11:50:10.462122] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:44.325 NewBaseBdev 00:11:44.325 [2024-11-27 11:50:10.462283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.325 11:50:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.325 [ 00:11:44.325 { 00:11:44.325 "name": "NewBaseBdev", 00:11:44.325 "aliases": [ 00:11:44.325 "b0b2a066-6af7-41a1-8ec8-f3d5aca0c5b7" 00:11:44.325 ], 00:11:44.325 "product_name": "Malloc disk", 00:11:44.325 "block_size": 512, 00:11:44.325 "num_blocks": 65536, 00:11:44.325 "uuid": "b0b2a066-6af7-41a1-8ec8-f3d5aca0c5b7", 00:11:44.325 "assigned_rate_limits": { 00:11:44.325 "rw_ios_per_sec": 0, 00:11:44.325 "rw_mbytes_per_sec": 0, 00:11:44.325 "r_mbytes_per_sec": 0, 00:11:44.325 "w_mbytes_per_sec": 0 00:11:44.325 }, 00:11:44.325 "claimed": true, 00:11:44.325 "claim_type": "exclusive_write", 00:11:44.325 "zoned": false, 00:11:44.325 "supported_io_types": { 00:11:44.325 "read": true, 00:11:44.325 "write": true, 00:11:44.325 "unmap": true, 00:11:44.325 "flush": true, 00:11:44.325 "reset": true, 00:11:44.325 "nvme_admin": false, 00:11:44.325 "nvme_io": false, 00:11:44.325 "nvme_io_md": false, 00:11:44.325 "write_zeroes": true, 00:11:44.325 "zcopy": true, 00:11:44.325 "get_zone_info": false, 00:11:44.325 "zone_management": false, 00:11:44.325 "zone_append": false, 00:11:44.325 "compare": false, 00:11:44.325 "compare_and_write": false, 00:11:44.325 "abort": true, 00:11:44.325 "seek_hole": false, 00:11:44.325 "seek_data": false, 00:11:44.325 "copy": true, 00:11:44.325 "nvme_iov_md": false 00:11:44.325 }, 00:11:44.325 "memory_domains": [ 00:11:44.325 { 00:11:44.325 "dma_device_id": "system", 00:11:44.325 "dma_device_type": 1 00:11:44.325 }, 00:11:44.325 { 00:11:44.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.325 "dma_device_type": 2 00:11:44.325 } 00:11:44.325 ], 00:11:44.325 "driver_specific": {} 00:11:44.325 } 00:11:44.325 ] 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:44.325 11:50:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.325 "name": "Existed_Raid", 00:11:44.325 "uuid": "63cc5df3-2eaa-4a3d-bb66-b0f4ab456585", 00:11:44.325 "strip_size_kb": 0, 00:11:44.325 
"state": "online", 00:11:44.325 "raid_level": "raid1", 00:11:44.325 "superblock": true, 00:11:44.325 "num_base_bdevs": 4, 00:11:44.325 "num_base_bdevs_discovered": 4, 00:11:44.325 "num_base_bdevs_operational": 4, 00:11:44.325 "base_bdevs_list": [ 00:11:44.325 { 00:11:44.325 "name": "NewBaseBdev", 00:11:44.325 "uuid": "b0b2a066-6af7-41a1-8ec8-f3d5aca0c5b7", 00:11:44.325 "is_configured": true, 00:11:44.325 "data_offset": 2048, 00:11:44.325 "data_size": 63488 00:11:44.325 }, 00:11:44.325 { 00:11:44.325 "name": "BaseBdev2", 00:11:44.325 "uuid": "d583212b-6486-42e5-be5a-91a58a8761af", 00:11:44.325 "is_configured": true, 00:11:44.325 "data_offset": 2048, 00:11:44.325 "data_size": 63488 00:11:44.325 }, 00:11:44.325 { 00:11:44.325 "name": "BaseBdev3", 00:11:44.325 "uuid": "57b2589d-ce1d-4ea1-86bd-beb00b51a4ac", 00:11:44.325 "is_configured": true, 00:11:44.325 "data_offset": 2048, 00:11:44.325 "data_size": 63488 00:11:44.325 }, 00:11:44.325 { 00:11:44.325 "name": "BaseBdev4", 00:11:44.325 "uuid": "4e866615-44fd-4ab5-a9e9-06c06311f8dd", 00:11:44.325 "is_configured": true, 00:11:44.325 "data_offset": 2048, 00:11:44.325 "data_size": 63488 00:11:44.325 } 00:11:44.325 ] 00:11:44.325 }' 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.325 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.585 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:44.585 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:44.585 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:44.585 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:44.585 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:44.585 
11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:44.585 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:44.585 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.585 11:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:44.585 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.844 [2024-11-27 11:50:10.972959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:44.844 11:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.844 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:44.844 "name": "Existed_Raid", 00:11:44.844 "aliases": [ 00:11:44.844 "63cc5df3-2eaa-4a3d-bb66-b0f4ab456585" 00:11:44.844 ], 00:11:44.844 "product_name": "Raid Volume", 00:11:44.844 "block_size": 512, 00:11:44.844 "num_blocks": 63488, 00:11:44.844 "uuid": "63cc5df3-2eaa-4a3d-bb66-b0f4ab456585", 00:11:44.844 "assigned_rate_limits": { 00:11:44.844 "rw_ios_per_sec": 0, 00:11:44.844 "rw_mbytes_per_sec": 0, 00:11:44.844 "r_mbytes_per_sec": 0, 00:11:44.844 "w_mbytes_per_sec": 0 00:11:44.844 }, 00:11:44.844 "claimed": false, 00:11:44.844 "zoned": false, 00:11:44.844 "supported_io_types": { 00:11:44.844 "read": true, 00:11:44.844 "write": true, 00:11:44.844 "unmap": false, 00:11:44.844 "flush": false, 00:11:44.844 "reset": true, 00:11:44.844 "nvme_admin": false, 00:11:44.844 "nvme_io": false, 00:11:44.844 "nvme_io_md": false, 00:11:44.844 "write_zeroes": true, 00:11:44.844 "zcopy": false, 00:11:44.844 "get_zone_info": false, 00:11:44.844 "zone_management": false, 00:11:44.844 "zone_append": false, 00:11:44.844 "compare": false, 00:11:44.844 "compare_and_write": false, 00:11:44.844 
"abort": false, 00:11:44.844 "seek_hole": false, 00:11:44.844 "seek_data": false, 00:11:44.844 "copy": false, 00:11:44.844 "nvme_iov_md": false 00:11:44.844 }, 00:11:44.844 "memory_domains": [ 00:11:44.844 { 00:11:44.844 "dma_device_id": "system", 00:11:44.844 "dma_device_type": 1 00:11:44.844 }, 00:11:44.844 { 00:11:44.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.844 "dma_device_type": 2 00:11:44.844 }, 00:11:44.844 { 00:11:44.844 "dma_device_id": "system", 00:11:44.844 "dma_device_type": 1 00:11:44.844 }, 00:11:44.844 { 00:11:44.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.844 "dma_device_type": 2 00:11:44.844 }, 00:11:44.844 { 00:11:44.844 "dma_device_id": "system", 00:11:44.844 "dma_device_type": 1 00:11:44.844 }, 00:11:44.844 { 00:11:44.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.844 "dma_device_type": 2 00:11:44.844 }, 00:11:44.844 { 00:11:44.844 "dma_device_id": "system", 00:11:44.844 "dma_device_type": 1 00:11:44.844 }, 00:11:44.844 { 00:11:44.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.844 "dma_device_type": 2 00:11:44.844 } 00:11:44.844 ], 00:11:44.844 "driver_specific": { 00:11:44.844 "raid": { 00:11:44.844 "uuid": "63cc5df3-2eaa-4a3d-bb66-b0f4ab456585", 00:11:44.844 "strip_size_kb": 0, 00:11:44.844 "state": "online", 00:11:44.844 "raid_level": "raid1", 00:11:44.844 "superblock": true, 00:11:44.844 "num_base_bdevs": 4, 00:11:44.844 "num_base_bdevs_discovered": 4, 00:11:44.844 "num_base_bdevs_operational": 4, 00:11:44.844 "base_bdevs_list": [ 00:11:44.844 { 00:11:44.844 "name": "NewBaseBdev", 00:11:44.844 "uuid": "b0b2a066-6af7-41a1-8ec8-f3d5aca0c5b7", 00:11:44.844 "is_configured": true, 00:11:44.844 "data_offset": 2048, 00:11:44.844 "data_size": 63488 00:11:44.844 }, 00:11:44.844 { 00:11:44.844 "name": "BaseBdev2", 00:11:44.844 "uuid": "d583212b-6486-42e5-be5a-91a58a8761af", 00:11:44.844 "is_configured": true, 00:11:44.844 "data_offset": 2048, 00:11:44.844 "data_size": 63488 00:11:44.844 }, 00:11:44.844 { 
00:11:44.844 "name": "BaseBdev3", 00:11:44.844 "uuid": "57b2589d-ce1d-4ea1-86bd-beb00b51a4ac", 00:11:44.844 "is_configured": true, 00:11:44.844 "data_offset": 2048, 00:11:44.844 "data_size": 63488 00:11:44.844 }, 00:11:44.844 { 00:11:44.844 "name": "BaseBdev4", 00:11:44.844 "uuid": "4e866615-44fd-4ab5-a9e9-06c06311f8dd", 00:11:44.844 "is_configured": true, 00:11:44.844 "data_offset": 2048, 00:11:44.844 "data_size": 63488 00:11:44.844 } 00:11:44.844 ] 00:11:44.844 } 00:11:44.844 } 00:11:44.844 }' 00:11:44.844 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:44.844 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:44.844 BaseBdev2 00:11:44.844 BaseBdev3 00:11:44.844 BaseBdev4' 00:11:44.844 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.844 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:44.844 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.844 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:44.844 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.844 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.844 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.844 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.844 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:44.844 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.844 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.844 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.844 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:44.844 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.844 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.845 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.845 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.845 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.845 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.845 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.845 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:44.845 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.845 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.845 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.845 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.845 11:50:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.845 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.108 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:45.108 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.108 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.108 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.108 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.108 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.108 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.108 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:45.108 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.108 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.108 [2024-11-27 11:50:11.284041] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:45.108 [2024-11-27 11:50:11.284075] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:45.108 [2024-11-27 11:50:11.284174] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:45.108 [2024-11-27 11:50:11.284503] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:45.108 [2024-11-27 11:50:11.284527] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:11:45.108 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.108 11:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73853 00:11:45.108 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73853 ']' 00:11:45.108 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73853 00:11:45.108 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:45.108 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:45.108 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73853 00:11:45.108 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:45.108 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:45.108 killing process with pid 73853 00:11:45.108 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73853' 00:11:45.108 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73853 00:11:45.108 [2024-11-27 11:50:11.331558] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:45.108 11:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73853 00:11:45.375 [2024-11-27 11:50:11.742561] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:46.753 11:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:46.753 00:11:46.753 real 0m11.738s 00:11:46.753 user 0m18.671s 00:11:46.753 sys 0m2.089s 00:11:46.753 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:46.753 11:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.753 ************************************ 00:11:46.753 END TEST raid_state_function_test_sb 00:11:46.753 ************************************ 00:11:46.753 11:50:12 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:46.753 11:50:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:46.753 11:50:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.753 11:50:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:46.753 ************************************ 00:11:46.753 START TEST raid_superblock_test 00:11:46.753 ************************************ 00:11:46.753 11:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:11:46.753 11:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:46.753 11:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:46.753 11:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:46.753 11:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:46.753 11:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:46.753 11:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:46.753 11:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:46.753 11:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:46.753 11:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:46.753 11:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:46.753 11:50:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:46.753 11:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:46.753 11:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:46.753 11:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:46.753 11:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:46.753 11:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74529 00:11:46.753 11:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:46.753 11:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74529 00:11:46.753 11:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74529 ']' 00:11:46.753 11:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.753 11:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:46.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.753 11:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.753 11:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:46.753 11:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.753 [2024-11-27 11:50:13.043635] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:11:46.753 [2024-11-27 11:50:13.043755] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74529 ] 00:11:47.012 [2024-11-27 11:50:13.218078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.012 [2024-11-27 11:50:13.332725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.270 [2024-11-27 11:50:13.537356] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.270 [2024-11-27 11:50:13.537414] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.528 11:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:47.529 11:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:47.529 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:47.529 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:47.529 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:47.529 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:47.529 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:47.529 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:47.529 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:47.529 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:47.529 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:47.529 
11:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.529 11:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.789 malloc1 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.789 [2024-11-27 11:50:13.937842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:47.789 [2024-11-27 11:50:13.937912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.789 [2024-11-27 11:50:13.937950] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:47.789 [2024-11-27 11:50:13.937960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.789 [2024-11-27 11:50:13.940132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.789 [2024-11-27 11:50:13.940169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:47.789 pt1 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.789 malloc2 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.789 [2024-11-27 11:50:13.992535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:47.789 [2024-11-27 11:50:13.992597] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.789 [2024-11-27 11:50:13.992623] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:47.789 [2024-11-27 11:50:13.992632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.789 [2024-11-27 11:50:13.994711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.789 [2024-11-27 11:50:13.994744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:47.789 
pt2 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.789 11:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.789 malloc3 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.789 [2024-11-27 11:50:14.058644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:47.789 [2024-11-27 11:50:14.058715] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.789 [2024-11-27 11:50:14.058737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:47.789 [2024-11-27 11:50:14.058746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.789 [2024-11-27 11:50:14.060852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.789 [2024-11-27 11:50:14.060883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:47.789 pt3 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.789 malloc4 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.789 [2024-11-27 11:50:14.115697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:47.789 [2024-11-27 11:50:14.115761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.789 [2024-11-27 11:50:14.115784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:47.789 [2024-11-27 11:50:14.115794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.789 [2024-11-27 11:50:14.117936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.789 [2024-11-27 11:50:14.117968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:47.789 pt4 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.789 [2024-11-27 11:50:14.127717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:47.789 [2024-11-27 11:50:14.129649] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:47.789 [2024-11-27 11:50:14.129717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:47.789 [2024-11-27 11:50:14.129780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:47.789 [2024-11-27 11:50:14.129986] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:47.789 [2024-11-27 11:50:14.130011] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:47.789 [2024-11-27 11:50:14.130278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:47.789 [2024-11-27 11:50:14.130467] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:47.789 [2024-11-27 11:50:14.130490] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:47.789 [2024-11-27 11:50:14.130672] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.789 
11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.789 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.790 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.790 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.790 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.049 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.049 "name": "raid_bdev1", 00:11:48.049 "uuid": "51d80f8f-2683-4bc9-8167-2b036b34dd2f", 00:11:48.049 "strip_size_kb": 0, 00:11:48.049 "state": "online", 00:11:48.049 "raid_level": "raid1", 00:11:48.049 "superblock": true, 00:11:48.049 "num_base_bdevs": 4, 00:11:48.049 "num_base_bdevs_discovered": 4, 00:11:48.049 "num_base_bdevs_operational": 4, 00:11:48.049 "base_bdevs_list": [ 00:11:48.049 { 00:11:48.049 "name": "pt1", 00:11:48.049 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:48.049 "is_configured": true, 00:11:48.049 "data_offset": 2048, 00:11:48.049 "data_size": 63488 00:11:48.049 }, 00:11:48.049 { 00:11:48.049 "name": "pt2", 00:11:48.049 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.049 "is_configured": true, 00:11:48.049 "data_offset": 2048, 00:11:48.049 "data_size": 63488 00:11:48.049 }, 00:11:48.049 { 00:11:48.049 "name": "pt3", 00:11:48.049 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.049 "is_configured": true, 00:11:48.049 "data_offset": 2048, 00:11:48.049 "data_size": 63488 
00:11:48.049 }, 00:11:48.049 { 00:11:48.049 "name": "pt4", 00:11:48.049 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:48.049 "is_configured": true, 00:11:48.049 "data_offset": 2048, 00:11:48.049 "data_size": 63488 00:11:48.049 } 00:11:48.049 ] 00:11:48.049 }' 00:11:48.049 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.049 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.307 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:48.307 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:48.307 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:48.307 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:48.307 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:48.307 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:48.307 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:48.307 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.307 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:48.307 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.307 [2024-11-27 11:50:14.567548] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.307 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.307 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:48.307 "name": "raid_bdev1", 00:11:48.307 "aliases": [ 00:11:48.307 "51d80f8f-2683-4bc9-8167-2b036b34dd2f" 00:11:48.307 ], 
00:11:48.307 "product_name": "Raid Volume", 00:11:48.307 "block_size": 512, 00:11:48.307 "num_blocks": 63488, 00:11:48.307 "uuid": "51d80f8f-2683-4bc9-8167-2b036b34dd2f", 00:11:48.307 "assigned_rate_limits": { 00:11:48.307 "rw_ios_per_sec": 0, 00:11:48.307 "rw_mbytes_per_sec": 0, 00:11:48.307 "r_mbytes_per_sec": 0, 00:11:48.307 "w_mbytes_per_sec": 0 00:11:48.307 }, 00:11:48.307 "claimed": false, 00:11:48.307 "zoned": false, 00:11:48.307 "supported_io_types": { 00:11:48.307 "read": true, 00:11:48.307 "write": true, 00:11:48.307 "unmap": false, 00:11:48.307 "flush": false, 00:11:48.307 "reset": true, 00:11:48.307 "nvme_admin": false, 00:11:48.307 "nvme_io": false, 00:11:48.307 "nvme_io_md": false, 00:11:48.307 "write_zeroes": true, 00:11:48.307 "zcopy": false, 00:11:48.307 "get_zone_info": false, 00:11:48.307 "zone_management": false, 00:11:48.307 "zone_append": false, 00:11:48.307 "compare": false, 00:11:48.307 "compare_and_write": false, 00:11:48.307 "abort": false, 00:11:48.307 "seek_hole": false, 00:11:48.307 "seek_data": false, 00:11:48.307 "copy": false, 00:11:48.307 "nvme_iov_md": false 00:11:48.307 }, 00:11:48.307 "memory_domains": [ 00:11:48.307 { 00:11:48.307 "dma_device_id": "system", 00:11:48.307 "dma_device_type": 1 00:11:48.307 }, 00:11:48.307 { 00:11:48.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.307 "dma_device_type": 2 00:11:48.307 }, 00:11:48.307 { 00:11:48.307 "dma_device_id": "system", 00:11:48.307 "dma_device_type": 1 00:11:48.307 }, 00:11:48.307 { 00:11:48.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.307 "dma_device_type": 2 00:11:48.307 }, 00:11:48.307 { 00:11:48.307 "dma_device_id": "system", 00:11:48.307 "dma_device_type": 1 00:11:48.307 }, 00:11:48.307 { 00:11:48.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.307 "dma_device_type": 2 00:11:48.307 }, 00:11:48.307 { 00:11:48.307 "dma_device_id": "system", 00:11:48.307 "dma_device_type": 1 00:11:48.307 }, 00:11:48.307 { 00:11:48.307 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:48.307 "dma_device_type": 2 00:11:48.307 } 00:11:48.307 ], 00:11:48.307 "driver_specific": { 00:11:48.307 "raid": { 00:11:48.307 "uuid": "51d80f8f-2683-4bc9-8167-2b036b34dd2f", 00:11:48.307 "strip_size_kb": 0, 00:11:48.307 "state": "online", 00:11:48.307 "raid_level": "raid1", 00:11:48.307 "superblock": true, 00:11:48.307 "num_base_bdevs": 4, 00:11:48.307 "num_base_bdevs_discovered": 4, 00:11:48.307 "num_base_bdevs_operational": 4, 00:11:48.307 "base_bdevs_list": [ 00:11:48.307 { 00:11:48.307 "name": "pt1", 00:11:48.307 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:48.307 "is_configured": true, 00:11:48.307 "data_offset": 2048, 00:11:48.307 "data_size": 63488 00:11:48.307 }, 00:11:48.307 { 00:11:48.307 "name": "pt2", 00:11:48.307 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.307 "is_configured": true, 00:11:48.307 "data_offset": 2048, 00:11:48.307 "data_size": 63488 00:11:48.307 }, 00:11:48.307 { 00:11:48.307 "name": "pt3", 00:11:48.307 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.307 "is_configured": true, 00:11:48.307 "data_offset": 2048, 00:11:48.307 "data_size": 63488 00:11:48.307 }, 00:11:48.307 { 00:11:48.307 "name": "pt4", 00:11:48.307 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:48.307 "is_configured": true, 00:11:48.307 "data_offset": 2048, 00:11:48.307 "data_size": 63488 00:11:48.307 } 00:11:48.307 ] 00:11:48.307 } 00:11:48.307 } 00:11:48.307 }' 00:11:48.307 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:48.307 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:48.307 pt2 00:11:48.307 pt3 00:11:48.307 pt4' 00:11:48.307 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.307 11:50:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:48.308 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.308 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:48.308 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.308 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.308 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.566 11:50:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:48.566 [2024-11-27 11:50:14.874851] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=51d80f8f-2683-4bc9-8167-2b036b34dd2f 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 51d80f8f-2683-4bc9-8167-2b036b34dd2f ']' 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.566 [2024-11-27 11:50:14.922401] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.566 [2024-11-27 11:50:14.922440] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.566 [2024-11-27 11:50:14.922635] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.566 [2024-11-27 11:50:14.922778] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.566 [2024-11-27 11:50:14.922814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:48.566 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.854 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:48.854 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:48.854 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:48.854 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:48.854 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.854 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.854 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.854 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:48.854 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:48.854 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.854 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.854 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.854 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:48.854 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:48.854 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.854 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.854 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:11:48.854 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:48.854 11:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:48.854 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.854 11:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.854 [2024-11-27 11:50:15.062161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:48.854 [2024-11-27 11:50:15.064254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:48.854 [2024-11-27 11:50:15.064331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:48.854 [2024-11-27 11:50:15.064378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:48.854 [2024-11-27 11:50:15.064463] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:48.854 [2024-11-27 11:50:15.064527] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:48.854 [2024-11-27 11:50:15.064551] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:48.854 [2024-11-27 11:50:15.064577] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:48.854 [2024-11-27 11:50:15.064594] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.854 [2024-11-27 11:50:15.064607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:11:48.854 request: 00:11:48.854 { 00:11:48.854 "name": "raid_bdev1", 00:11:48.854 "raid_level": "raid1", 00:11:48.854 "base_bdevs": [ 00:11:48.854 "malloc1", 00:11:48.854 "malloc2", 00:11:48.854 "malloc3", 00:11:48.854 "malloc4" 00:11:48.854 ], 00:11:48.854 "superblock": false, 00:11:48.854 "method": "bdev_raid_create", 00:11:48.854 "req_id": 1 00:11:48.854 } 00:11:48.854 Got JSON-RPC error response 00:11:48.854 response: 00:11:48.854 { 00:11:48.854 "code": -17, 00:11:48.854 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:48.854 } 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:48.854 11:50:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.854 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.855 [2024-11-27 11:50:15.122008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:48.855 [2024-11-27 11:50:15.122074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.855 [2024-11-27 11:50:15.122091] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:48.855 [2024-11-27 11:50:15.122102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.855 [2024-11-27 11:50:15.124609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.855 [2024-11-27 11:50:15.124651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:48.855 [2024-11-27 11:50:15.124756] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:48.855 [2024-11-27 11:50:15.124871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:48.855 pt1 00:11:48.855 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.855 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:48.855 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.855 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.855 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.855 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.855 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.855 11:50:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.855 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.855 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.855 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.855 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.855 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.855 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.855 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.855 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.855 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.855 "name": "raid_bdev1", 00:11:48.855 "uuid": "51d80f8f-2683-4bc9-8167-2b036b34dd2f", 00:11:48.855 "strip_size_kb": 0, 00:11:48.855 "state": "configuring", 00:11:48.855 "raid_level": "raid1", 00:11:48.855 "superblock": true, 00:11:48.855 "num_base_bdevs": 4, 00:11:48.855 "num_base_bdevs_discovered": 1, 00:11:48.855 "num_base_bdevs_operational": 4, 00:11:48.855 "base_bdevs_list": [ 00:11:48.855 { 00:11:48.855 "name": "pt1", 00:11:48.855 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:48.855 "is_configured": true, 00:11:48.855 "data_offset": 2048, 00:11:48.855 "data_size": 63488 00:11:48.855 }, 00:11:48.855 { 00:11:48.855 "name": null, 00:11:48.855 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.855 "is_configured": false, 00:11:48.855 "data_offset": 2048, 00:11:48.855 "data_size": 63488 00:11:48.855 }, 00:11:48.855 { 00:11:48.855 "name": null, 00:11:48.855 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.855 
"is_configured": false, 00:11:48.855 "data_offset": 2048, 00:11:48.855 "data_size": 63488 00:11:48.855 }, 00:11:48.855 { 00:11:48.855 "name": null, 00:11:48.855 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:48.855 "is_configured": false, 00:11:48.855 "data_offset": 2048, 00:11:48.855 "data_size": 63488 00:11:48.855 } 00:11:48.855 ] 00:11:48.855 }' 00:11:48.855 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.855 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.420 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:49.420 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:49.420 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.420 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.420 [2024-11-27 11:50:15.521382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:49.420 [2024-11-27 11:50:15.521462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.420 [2024-11-27 11:50:15.521485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:49.420 [2024-11-27 11:50:15.521496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.420 [2024-11-27 11:50:15.522012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.420 [2024-11-27 11:50:15.522050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:49.420 [2024-11-27 11:50:15.522146] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:49.420 [2024-11-27 11:50:15.522183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:11:49.420 pt2 00:11:49.420 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.420 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:49.420 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.420 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.420 [2024-11-27 11:50:15.529341] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:49.420 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.420 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:49.420 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:49.420 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.420 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.420 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.420 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.420 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.420 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.420 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.420 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.420 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.420 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:49.420 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.420 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.420 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.420 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.420 "name": "raid_bdev1", 00:11:49.420 "uuid": "51d80f8f-2683-4bc9-8167-2b036b34dd2f", 00:11:49.420 "strip_size_kb": 0, 00:11:49.420 "state": "configuring", 00:11:49.420 "raid_level": "raid1", 00:11:49.420 "superblock": true, 00:11:49.420 "num_base_bdevs": 4, 00:11:49.420 "num_base_bdevs_discovered": 1, 00:11:49.420 "num_base_bdevs_operational": 4, 00:11:49.420 "base_bdevs_list": [ 00:11:49.420 { 00:11:49.420 "name": "pt1", 00:11:49.420 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:49.420 "is_configured": true, 00:11:49.420 "data_offset": 2048, 00:11:49.420 "data_size": 63488 00:11:49.420 }, 00:11:49.420 { 00:11:49.420 "name": null, 00:11:49.420 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:49.420 "is_configured": false, 00:11:49.420 "data_offset": 0, 00:11:49.420 "data_size": 63488 00:11:49.420 }, 00:11:49.420 { 00:11:49.420 "name": null, 00:11:49.420 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:49.420 "is_configured": false, 00:11:49.420 "data_offset": 2048, 00:11:49.420 "data_size": 63488 00:11:49.420 }, 00:11:49.420 { 00:11:49.420 "name": null, 00:11:49.420 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:49.420 "is_configured": false, 00:11:49.420 "data_offset": 2048, 00:11:49.420 "data_size": 63488 00:11:49.420 } 00:11:49.420 ] 00:11:49.420 }' 00:11:49.420 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.420 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.679 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:11:49.679 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:49.679 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:49.679 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.679 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.679 [2024-11-27 11:50:15.956639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:49.679 [2024-11-27 11:50:15.956710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.679 [2024-11-27 11:50:15.956754] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:49.679 [2024-11-27 11:50:15.956768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.679 [2024-11-27 11:50:15.957339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.679 [2024-11-27 11:50:15.957376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:49.679 [2024-11-27 11:50:15.957477] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:49.679 [2024-11-27 11:50:15.957508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:49.679 pt2 00:11:49.679 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.679 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:49.679 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:49.679 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:49.679 11:50:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.679 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.679 [2024-11-27 11:50:15.968596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:49.679 [2024-11-27 11:50:15.968652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.679 [2024-11-27 11:50:15.968674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:49.679 [2024-11-27 11:50:15.968683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.679 [2024-11-27 11:50:15.969130] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.679 [2024-11-27 11:50:15.969157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:49.679 [2024-11-27 11:50:15.969237] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:49.679 [2024-11-27 11:50:15.969265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:49.679 pt3 00:11:49.679 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.679 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:49.679 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:49.679 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:49.679 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.679 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.679 [2024-11-27 11:50:15.976549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:49.679 [2024-11-27 
11:50:15.976598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.679 [2024-11-27 11:50:15.976627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:49.679 [2024-11-27 11:50:15.976652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.679 [2024-11-27 11:50:15.977059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.679 [2024-11-27 11:50:15.977085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:49.679 [2024-11-27 11:50:15.977154] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:49.679 [2024-11-27 11:50:15.977199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:49.679 [2024-11-27 11:50:15.977357] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:49.679 [2024-11-27 11:50:15.977371] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:49.679 [2024-11-27 11:50:15.977646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:49.679 [2024-11-27 11:50:15.977876] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:49.680 [2024-11-27 11:50:15.977894] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:49.680 [2024-11-27 11:50:15.978049] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.680 pt4 00:11:49.680 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.680 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:49.680 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:49.680 11:50:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:49.680 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:49.680 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:49.680 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.680 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.680 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.680 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.680 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.680 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.680 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.680 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.680 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.680 11:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.680 11:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.680 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.680 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.680 "name": "raid_bdev1", 00:11:49.680 "uuid": "51d80f8f-2683-4bc9-8167-2b036b34dd2f", 00:11:49.680 "strip_size_kb": 0, 00:11:49.680 "state": "online", 00:11:49.680 "raid_level": "raid1", 00:11:49.680 "superblock": true, 00:11:49.680 "num_base_bdevs": 4, 00:11:49.680 
"num_base_bdevs_discovered": 4, 00:11:49.680 "num_base_bdevs_operational": 4, 00:11:49.680 "base_bdevs_list": [ 00:11:49.680 { 00:11:49.680 "name": "pt1", 00:11:49.680 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:49.680 "is_configured": true, 00:11:49.680 "data_offset": 2048, 00:11:49.680 "data_size": 63488 00:11:49.680 }, 00:11:49.680 { 00:11:49.680 "name": "pt2", 00:11:49.680 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:49.680 "is_configured": true, 00:11:49.680 "data_offset": 2048, 00:11:49.680 "data_size": 63488 00:11:49.680 }, 00:11:49.680 { 00:11:49.680 "name": "pt3", 00:11:49.680 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:49.680 "is_configured": true, 00:11:49.680 "data_offset": 2048, 00:11:49.680 "data_size": 63488 00:11:49.680 }, 00:11:49.680 { 00:11:49.680 "name": "pt4", 00:11:49.680 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:49.680 "is_configured": true, 00:11:49.680 "data_offset": 2048, 00:11:49.680 "data_size": 63488 00:11:49.680 } 00:11:49.680 ] 00:11:49.680 }' 00:11:49.680 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.680 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:50.245 [2024-11-27 11:50:16.444241] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:50.245 "name": "raid_bdev1", 00:11:50.245 "aliases": [ 00:11:50.245 "51d80f8f-2683-4bc9-8167-2b036b34dd2f" 00:11:50.245 ], 00:11:50.245 "product_name": "Raid Volume", 00:11:50.245 "block_size": 512, 00:11:50.245 "num_blocks": 63488, 00:11:50.245 "uuid": "51d80f8f-2683-4bc9-8167-2b036b34dd2f", 00:11:50.245 "assigned_rate_limits": { 00:11:50.245 "rw_ios_per_sec": 0, 00:11:50.245 "rw_mbytes_per_sec": 0, 00:11:50.245 "r_mbytes_per_sec": 0, 00:11:50.245 "w_mbytes_per_sec": 0 00:11:50.245 }, 00:11:50.245 "claimed": false, 00:11:50.245 "zoned": false, 00:11:50.245 "supported_io_types": { 00:11:50.245 "read": true, 00:11:50.245 "write": true, 00:11:50.245 "unmap": false, 00:11:50.245 "flush": false, 00:11:50.245 "reset": true, 00:11:50.245 "nvme_admin": false, 00:11:50.245 "nvme_io": false, 00:11:50.245 "nvme_io_md": false, 00:11:50.245 "write_zeroes": true, 00:11:50.245 "zcopy": false, 00:11:50.245 "get_zone_info": false, 00:11:50.245 "zone_management": false, 00:11:50.245 "zone_append": false, 00:11:50.245 "compare": false, 00:11:50.245 "compare_and_write": false, 00:11:50.245 "abort": false, 00:11:50.245 "seek_hole": false, 00:11:50.245 "seek_data": false, 00:11:50.245 "copy": false, 00:11:50.245 "nvme_iov_md": false 00:11:50.245 }, 00:11:50.245 "memory_domains": [ 00:11:50.245 { 00:11:50.245 "dma_device_id": "system", 00:11:50.245 
"dma_device_type": 1 00:11:50.245 }, 00:11:50.245 { 00:11:50.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.245 "dma_device_type": 2 00:11:50.245 }, 00:11:50.245 { 00:11:50.245 "dma_device_id": "system", 00:11:50.245 "dma_device_type": 1 00:11:50.245 }, 00:11:50.245 { 00:11:50.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.245 "dma_device_type": 2 00:11:50.245 }, 00:11:50.245 { 00:11:50.245 "dma_device_id": "system", 00:11:50.245 "dma_device_type": 1 00:11:50.245 }, 00:11:50.245 { 00:11:50.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.245 "dma_device_type": 2 00:11:50.245 }, 00:11:50.245 { 00:11:50.245 "dma_device_id": "system", 00:11:50.245 "dma_device_type": 1 00:11:50.245 }, 00:11:50.245 { 00:11:50.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.245 "dma_device_type": 2 00:11:50.245 } 00:11:50.245 ], 00:11:50.245 "driver_specific": { 00:11:50.245 "raid": { 00:11:50.245 "uuid": "51d80f8f-2683-4bc9-8167-2b036b34dd2f", 00:11:50.245 "strip_size_kb": 0, 00:11:50.245 "state": "online", 00:11:50.245 "raid_level": "raid1", 00:11:50.245 "superblock": true, 00:11:50.245 "num_base_bdevs": 4, 00:11:50.245 "num_base_bdevs_discovered": 4, 00:11:50.245 "num_base_bdevs_operational": 4, 00:11:50.245 "base_bdevs_list": [ 00:11:50.245 { 00:11:50.245 "name": "pt1", 00:11:50.245 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:50.245 "is_configured": true, 00:11:50.245 "data_offset": 2048, 00:11:50.245 "data_size": 63488 00:11:50.245 }, 00:11:50.245 { 00:11:50.245 "name": "pt2", 00:11:50.245 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:50.245 "is_configured": true, 00:11:50.245 "data_offset": 2048, 00:11:50.245 "data_size": 63488 00:11:50.245 }, 00:11:50.245 { 00:11:50.245 "name": "pt3", 00:11:50.245 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:50.245 "is_configured": true, 00:11:50.245 "data_offset": 2048, 00:11:50.245 "data_size": 63488 00:11:50.245 }, 00:11:50.245 { 00:11:50.245 "name": "pt4", 00:11:50.245 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:11:50.245 "is_configured": true, 00:11:50.245 "data_offset": 2048, 00:11:50.245 "data_size": 63488 00:11:50.245 } 00:11:50.245 ] 00:11:50.245 } 00:11:50.245 } 00:11:50.245 }' 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:50.245 pt2 00:11:50.245 pt3 00:11:50.245 pt4' 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.245 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:50.504 [2024-11-27 11:50:16.767653] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 51d80f8f-2683-4bc9-8167-2b036b34dd2f '!=' 51d80f8f-2683-4bc9-8167-2b036b34dd2f ']' 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.504 [2024-11-27 11:50:16.807369] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:50.504 11:50:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.504 "name": "raid_bdev1", 00:11:50.504 "uuid": "51d80f8f-2683-4bc9-8167-2b036b34dd2f", 00:11:50.504 "strip_size_kb": 0, 00:11:50.504 "state": "online", 
00:11:50.504 "raid_level": "raid1", 00:11:50.504 "superblock": true, 00:11:50.504 "num_base_bdevs": 4, 00:11:50.504 "num_base_bdevs_discovered": 3, 00:11:50.504 "num_base_bdevs_operational": 3, 00:11:50.504 "base_bdevs_list": [ 00:11:50.504 { 00:11:50.504 "name": null, 00:11:50.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.504 "is_configured": false, 00:11:50.504 "data_offset": 0, 00:11:50.504 "data_size": 63488 00:11:50.504 }, 00:11:50.504 { 00:11:50.504 "name": "pt2", 00:11:50.504 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:50.504 "is_configured": true, 00:11:50.504 "data_offset": 2048, 00:11:50.504 "data_size": 63488 00:11:50.504 }, 00:11:50.504 { 00:11:50.504 "name": "pt3", 00:11:50.504 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:50.504 "is_configured": true, 00:11:50.504 "data_offset": 2048, 00:11:50.504 "data_size": 63488 00:11:50.504 }, 00:11:50.504 { 00:11:50.504 "name": "pt4", 00:11:50.504 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:50.504 "is_configured": true, 00:11:50.504 "data_offset": 2048, 00:11:50.504 "data_size": 63488 00:11:50.504 } 00:11:50.504 ] 00:11:50.504 }' 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.504 11:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.115 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:51.115 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.115 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.115 [2024-11-27 11:50:17.262576] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:51.115 [2024-11-27 11:50:17.262655] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:51.115 [2024-11-27 11:50:17.262749] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:11:51.115 [2024-11-27 11:50:17.262865] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:51.115 [2024-11-27 11:50:17.262878] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:51.115 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.115 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.115 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:51.115 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.115 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.115 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.115 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:51.116 
11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.116 [2024-11-27 11:50:17.354433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:51.116 [2024-11-27 11:50:17.354499] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.116 [2024-11-27 11:50:17.354519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:51.116 [2024-11-27 11:50:17.354528] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.116 [2024-11-27 11:50:17.356737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.116 [2024-11-27 11:50:17.356774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:51.116 [2024-11-27 11:50:17.356887] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:51.116 [2024-11-27 11:50:17.356938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:51.116 pt2 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.116 "name": "raid_bdev1", 00:11:51.116 "uuid": "51d80f8f-2683-4bc9-8167-2b036b34dd2f", 00:11:51.116 "strip_size_kb": 0, 00:11:51.116 "state": "configuring", 00:11:51.116 "raid_level": "raid1", 00:11:51.116 "superblock": true, 00:11:51.116 "num_base_bdevs": 4, 00:11:51.116 "num_base_bdevs_discovered": 1, 00:11:51.116 "num_base_bdevs_operational": 3, 00:11:51.116 "base_bdevs_list": [ 00:11:51.116 { 00:11:51.116 "name": null, 00:11:51.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.116 "is_configured": false, 00:11:51.116 "data_offset": 2048, 00:11:51.116 "data_size": 63488 00:11:51.116 }, 00:11:51.116 { 00:11:51.116 "name": "pt2", 00:11:51.116 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:51.116 "is_configured": true, 00:11:51.116 "data_offset": 2048, 00:11:51.116 "data_size": 63488 00:11:51.116 }, 00:11:51.116 { 00:11:51.116 "name": null, 00:11:51.116 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:51.116 "is_configured": false, 00:11:51.116 "data_offset": 2048, 00:11:51.116 "data_size": 63488 00:11:51.116 }, 00:11:51.116 { 00:11:51.116 "name": null, 00:11:51.116 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:51.116 "is_configured": false, 00:11:51.116 "data_offset": 2048, 00:11:51.116 "data_size": 63488 00:11:51.116 } 00:11:51.116 ] 00:11:51.116 }' 
00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.116 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.684 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:51.684 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:51.684 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:51.684 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.684 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.684 [2024-11-27 11:50:17.805644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:51.684 [2024-11-27 11:50:17.805713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.684 [2024-11-27 11:50:17.805737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:51.684 [2024-11-27 11:50:17.805746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.684 [2024-11-27 11:50:17.806249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.684 [2024-11-27 11:50:17.806284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:51.684 [2024-11-27 11:50:17.806379] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:51.684 [2024-11-27 11:50:17.806406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:51.684 pt3 00:11:51.684 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.684 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:11:51.684 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.684 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.684 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.684 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.684 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:51.684 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.684 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.684 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.684 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.684 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.684 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.684 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.684 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.684 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.684 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.684 "name": "raid_bdev1", 00:11:51.684 "uuid": "51d80f8f-2683-4bc9-8167-2b036b34dd2f", 00:11:51.684 "strip_size_kb": 0, 00:11:51.684 "state": "configuring", 00:11:51.684 "raid_level": "raid1", 00:11:51.684 "superblock": true, 00:11:51.684 "num_base_bdevs": 4, 00:11:51.684 "num_base_bdevs_discovered": 2, 00:11:51.684 "num_base_bdevs_operational": 3, 00:11:51.684 
"base_bdevs_list": [ 00:11:51.684 { 00:11:51.684 "name": null, 00:11:51.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.684 "is_configured": false, 00:11:51.684 "data_offset": 2048, 00:11:51.684 "data_size": 63488 00:11:51.684 }, 00:11:51.684 { 00:11:51.684 "name": "pt2", 00:11:51.684 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:51.684 "is_configured": true, 00:11:51.684 "data_offset": 2048, 00:11:51.684 "data_size": 63488 00:11:51.684 }, 00:11:51.684 { 00:11:51.684 "name": "pt3", 00:11:51.684 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:51.684 "is_configured": true, 00:11:51.684 "data_offset": 2048, 00:11:51.684 "data_size": 63488 00:11:51.684 }, 00:11:51.684 { 00:11:51.684 "name": null, 00:11:51.684 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:51.684 "is_configured": false, 00:11:51.684 "data_offset": 2048, 00:11:51.684 "data_size": 63488 00:11:51.684 } 00:11:51.684 ] 00:11:51.684 }' 00:11:51.684 11:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.684 11:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.943 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:51.943 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:51.943 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:11:51.943 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:51.944 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.944 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.944 [2024-11-27 11:50:18.264910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:51.944 [2024-11-27 11:50:18.264985] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.944 [2024-11-27 11:50:18.265014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:11:51.944 [2024-11-27 11:50:18.265023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.944 [2024-11-27 11:50:18.265486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.944 [2024-11-27 11:50:18.265504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:51.944 [2024-11-27 11:50:18.265612] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:51.944 [2024-11-27 11:50:18.265636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:51.944 [2024-11-27 11:50:18.265787] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:51.944 [2024-11-27 11:50:18.265797] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:51.944 [2024-11-27 11:50:18.266164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:51.944 [2024-11-27 11:50:18.266378] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:51.944 [2024-11-27 11:50:18.266430] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:51.944 [2024-11-27 11:50:18.266639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.944 pt4 00:11:51.944 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.944 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:51.944 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.944 11:50:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.944 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.944 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.944 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:51.944 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.944 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.944 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.944 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.944 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.944 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.944 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.944 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.944 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.944 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.944 "name": "raid_bdev1", 00:11:51.944 "uuid": "51d80f8f-2683-4bc9-8167-2b036b34dd2f", 00:11:51.944 "strip_size_kb": 0, 00:11:51.944 "state": "online", 00:11:51.944 "raid_level": "raid1", 00:11:51.944 "superblock": true, 00:11:51.944 "num_base_bdevs": 4, 00:11:51.944 "num_base_bdevs_discovered": 3, 00:11:51.944 "num_base_bdevs_operational": 3, 00:11:51.944 "base_bdevs_list": [ 00:11:51.944 { 00:11:51.944 "name": null, 00:11:51.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.944 "is_configured": false, 00:11:51.944 
"data_offset": 2048, 00:11:51.944 "data_size": 63488 00:11:51.944 }, 00:11:51.944 { 00:11:51.944 "name": "pt2", 00:11:51.944 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:51.944 "is_configured": true, 00:11:51.944 "data_offset": 2048, 00:11:51.944 "data_size": 63488 00:11:51.944 }, 00:11:51.944 { 00:11:51.944 "name": "pt3", 00:11:51.944 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:51.944 "is_configured": true, 00:11:51.944 "data_offset": 2048, 00:11:51.944 "data_size": 63488 00:11:51.944 }, 00:11:51.944 { 00:11:51.944 "name": "pt4", 00:11:51.944 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:51.944 "is_configured": true, 00:11:51.944 "data_offset": 2048, 00:11:51.944 "data_size": 63488 00:11:51.944 } 00:11:51.944 ] 00:11:51.944 }' 00:11:51.944 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.944 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.513 [2024-11-27 11:50:18.767967] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:52.513 [2024-11-27 11:50:18.767998] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:52.513 [2024-11-27 11:50:18.768081] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:52.513 [2024-11-27 11:50:18.768160] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:52.513 [2024-11-27 11:50:18.768172] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:52.513 11:50:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.513 [2024-11-27 11:50:18.835972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:52.513 [2024-11-27 11:50:18.836078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:11:52.513 [2024-11-27 11:50:18.836117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:11:52.513 [2024-11-27 11:50:18.836155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.513 [2024-11-27 11:50:18.838361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.513 [2024-11-27 11:50:18.838437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:52.513 [2024-11-27 11:50:18.838563] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:52.513 [2024-11-27 11:50:18.838628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:52.513 [2024-11-27 11:50:18.838820] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:52.513 [2024-11-27 11:50:18.838903] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:52.513 [2024-11-27 11:50:18.838952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:52.513 [2024-11-27 11:50:18.839049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:52.513 [2024-11-27 11:50:18.839194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:52.513 pt1 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.513 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.513 "name": "raid_bdev1", 00:11:52.513 "uuid": "51d80f8f-2683-4bc9-8167-2b036b34dd2f", 00:11:52.513 "strip_size_kb": 0, 00:11:52.513 "state": "configuring", 00:11:52.513 "raid_level": "raid1", 00:11:52.513 "superblock": true, 00:11:52.513 "num_base_bdevs": 4, 00:11:52.513 "num_base_bdevs_discovered": 2, 00:11:52.513 "num_base_bdevs_operational": 3, 00:11:52.513 "base_bdevs_list": [ 00:11:52.513 { 00:11:52.513 "name": null, 00:11:52.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.513 "is_configured": false, 00:11:52.513 "data_offset": 2048, 00:11:52.513 
"data_size": 63488 00:11:52.513 }, 00:11:52.513 { 00:11:52.513 "name": "pt2", 00:11:52.513 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:52.513 "is_configured": true, 00:11:52.513 "data_offset": 2048, 00:11:52.513 "data_size": 63488 00:11:52.513 }, 00:11:52.513 { 00:11:52.513 "name": "pt3", 00:11:52.513 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:52.513 "is_configured": true, 00:11:52.513 "data_offset": 2048, 00:11:52.513 "data_size": 63488 00:11:52.513 }, 00:11:52.513 { 00:11:52.513 "name": null, 00:11:52.513 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:52.513 "is_configured": false, 00:11:52.513 "data_offset": 2048, 00:11:52.513 "data_size": 63488 00:11:52.513 } 00:11:52.514 ] 00:11:52.514 }' 00:11:52.514 11:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.514 11:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.082 11:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:53.082 11:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.082 11:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.082 11:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:53.082 11:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.082 11:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:53.082 11:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:53.082 11:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.082 11:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.082 [2024-11-27 
11:50:19.339162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:53.082 [2024-11-27 11:50:19.339291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.082 [2024-11-27 11:50:19.339334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:53.082 [2024-11-27 11:50:19.339366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.082 [2024-11-27 11:50:19.339888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.082 [2024-11-27 11:50:19.339953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:53.082 [2024-11-27 11:50:19.340053] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:53.082 [2024-11-27 11:50:19.340078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:53.082 [2024-11-27 11:50:19.340205] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:53.082 [2024-11-27 11:50:19.340213] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:53.082 [2024-11-27 11:50:19.340466] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:11:53.082 [2024-11-27 11:50:19.340616] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:53.083 [2024-11-27 11:50:19.340627] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:53.083 [2024-11-27 11:50:19.340762] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.083 pt4 00:11:53.083 11:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.083 11:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:53.083 11:50:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.083 11:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.083 11:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.083 11:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.083 11:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:53.083 11:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.083 11:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.083 11:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.083 11:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.083 11:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.083 11:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.083 11:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.083 11:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.083 11:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.083 11:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.083 "name": "raid_bdev1", 00:11:53.083 "uuid": "51d80f8f-2683-4bc9-8167-2b036b34dd2f", 00:11:53.083 "strip_size_kb": 0, 00:11:53.083 "state": "online", 00:11:53.083 "raid_level": "raid1", 00:11:53.083 "superblock": true, 00:11:53.083 "num_base_bdevs": 4, 00:11:53.083 "num_base_bdevs_discovered": 3, 00:11:53.083 "num_base_bdevs_operational": 3, 00:11:53.083 "base_bdevs_list": [ 00:11:53.083 { 
00:11:53.083 "name": null, 00:11:53.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.083 "is_configured": false, 00:11:53.083 "data_offset": 2048, 00:11:53.083 "data_size": 63488 00:11:53.083 }, 00:11:53.083 { 00:11:53.083 "name": "pt2", 00:11:53.083 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:53.083 "is_configured": true, 00:11:53.083 "data_offset": 2048, 00:11:53.083 "data_size": 63488 00:11:53.083 }, 00:11:53.083 { 00:11:53.083 "name": "pt3", 00:11:53.083 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:53.083 "is_configured": true, 00:11:53.083 "data_offset": 2048, 00:11:53.083 "data_size": 63488 00:11:53.083 }, 00:11:53.083 { 00:11:53.083 "name": "pt4", 00:11:53.083 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:53.083 "is_configured": true, 00:11:53.083 "data_offset": 2048, 00:11:53.083 "data_size": 63488 00:11:53.083 } 00:11:53.083 ] 00:11:53.083 }' 00:11:53.083 11:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.083 11:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.653 11:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:53.653 11:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:53.653 11:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.653 11:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.653 11:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.653 11:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:53.653 11:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:53.653 11:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:53.653 
11:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.653 11:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.653 [2024-11-27 11:50:19.854557] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:53.653 11:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.653 11:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 51d80f8f-2683-4bc9-8167-2b036b34dd2f '!=' 51d80f8f-2683-4bc9-8167-2b036b34dd2f ']' 00:11:53.653 11:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74529 00:11:53.653 11:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74529 ']' 00:11:53.653 11:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74529 00:11:53.653 11:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:53.653 11:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:53.653 11:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74529 00:11:53.653 11:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:53.653 11:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:53.653 11:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74529' 00:11:53.653 killing process with pid 74529 00:11:53.653 11:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74529 00:11:53.653 11:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74529 00:11:53.653 [2024-11-27 11:50:19.911464] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:53.653 [2024-11-27 11:50:19.911594] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:53.653 [2024-11-27 11:50:19.911726] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:53.653 [2024-11-27 11:50:19.911781] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:54.223 [2024-11-27 11:50:20.321926] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:55.163 11:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:55.163 00:11:55.163 real 0m8.495s 00:11:55.163 user 0m13.377s 00:11:55.163 sys 0m1.463s 00:11:55.163 11:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.163 11:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.163 ************************************ 00:11:55.163 END TEST raid_superblock_test 00:11:55.163 ************************************ 00:11:55.163 11:50:21 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:55.163 11:50:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:55.163 11:50:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.163 11:50:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:55.163 ************************************ 00:11:55.163 START TEST raid_read_error_test 00:11:55.163 ************************************ 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:55.163 11:50:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CQcH27l14w 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75016 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75016 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75016 ']' 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:55.163 11:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.423 [2024-11-27 11:50:21.618286] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:11:55.423 [2024-11-27 11:50:21.618480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75016 ] 00:11:55.423 [2024-11-27 11:50:21.790041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.682 [2024-11-27 11:50:21.904183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.942 [2024-11-27 11:50:22.102852] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.942 [2024-11-27 11:50:22.102882] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.201 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.202 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:56.202 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:56.202 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:56.202 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.202 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.202 BaseBdev1_malloc 00:11:56.202 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.202 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:56.202 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.202 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.202 true 00:11:56.202 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:56.202 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:11:56.202 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:56.202 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.202 [2024-11-27 11:50:22.518118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:11:56.202 [2024-11-27 11:50:22.518209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:56.202 [2024-11-27 11:50:22.518246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:11:56.202 [2024-11-27 11:50:22.518277] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:56.202 [2024-11-27 11:50:22.520546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:56.202 [2024-11-27 11:50:22.520637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:11:56.202 BaseBdev1
00:11:56.202 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:56.202 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:56.202 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:11:56.202 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:56.202 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.202 BaseBdev2_malloc
00:11:56.202 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:56.202 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:11:56.202 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:56.202 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.202 true
00:11:56.202 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:56.202 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:11:56.202 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:56.202 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.202 [2024-11-27 11:50:22.582492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:11:56.202 [2024-11-27 11:50:22.582561] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:56.202 [2024-11-27 11:50:22.582582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:11:56.202 [2024-11-27 11:50:22.582593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:56.462 [2024-11-27 11:50:22.585069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:56.462 [2024-11-27 11:50:22.585112] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:11:56.462 BaseBdev2
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.462 BaseBdev3_malloc
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.462 true
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.462 [2024-11-27 11:50:22.660952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:11:56.462 [2024-11-27 11:50:22.661005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:56.462 [2024-11-27 11:50:22.661024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:11:56.462 [2024-11-27 11:50:22.661034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:56.462 [2024-11-27 11:50:22.663184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:56.462 [2024-11-27 11:50:22.663258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:11:56.462 BaseBdev3
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.462 BaseBdev4_malloc
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.462 true
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.462 [2024-11-27 11:50:22.722890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:11:56.462 [2024-11-27 11:50:22.722986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:56.462 [2024-11-27 11:50:22.723029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:11:56.462 [2024-11-27 11:50:22.723041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:56.462 [2024-11-27 11:50:22.725353] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:56.462 [2024-11-27 11:50:22.725392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:11:56.462 BaseBdev4
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:56.462 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.462 [2024-11-27 11:50:22.734937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:56.462 [2024-11-27 11:50:22.737051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:56.462 [2024-11-27 11:50:22.737170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:56.462 [2024-11-27 11:50:22.737265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:11:56.462 [2024-11-27 11:50:22.737531] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580
00:11:56.462 [2024-11-27 11:50:22.737581] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:11:56.462 [2024-11-27 11:50:22.737857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0
00:11:56.463 [2024-11-27 11:50:22.738103] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580
00:11:56.463 [2024-11-27 11:50:22.738148] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580
00:11:56.463 [2024-11-27 11:50:22.738369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:56.463 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:56.463 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:11:56.463 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:56.463 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:56.463 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:56.463 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:56.463 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:56.463 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:56.463 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:56.463 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:56.463 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:56.463 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:56.463 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:56.463 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:56.463 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.463 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:56.463 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:56.463 "name": "raid_bdev1",
00:11:56.463 "uuid": "df05ef63-4266-46dc-a87c-97489c482db5",
00:11:56.463 "strip_size_kb": 0,
00:11:56.463 "state": "online",
00:11:56.463 "raid_level": "raid1",
00:11:56.463 "superblock": true,
00:11:56.463 "num_base_bdevs": 4,
00:11:56.463 "num_base_bdevs_discovered": 4,
00:11:56.463 "num_base_bdevs_operational": 4,
00:11:56.463 "base_bdevs_list": [
00:11:56.463 {
00:11:56.463 "name": "BaseBdev1",
00:11:56.463 "uuid": "05463364-2760-5a62-85fb-08eaa5fb125c",
00:11:56.463 "is_configured": true,
00:11:56.463 "data_offset": 2048,
00:11:56.463 "data_size": 63488
00:11:56.463 },
00:11:56.463 {
00:11:56.463 "name": "BaseBdev2",
00:11:56.463 "uuid": "a39ede10-8a25-5389-9147-a27cf84b8719",
00:11:56.463 "is_configured": true,
00:11:56.463 "data_offset": 2048,
00:11:56.463 "data_size": 63488
00:11:56.463 },
00:11:56.463 {
00:11:56.463 "name": "BaseBdev3",
00:11:56.463 "uuid": "111bdb42-944a-511e-92e6-585156f95e56",
00:11:56.463 "is_configured": true,
00:11:56.463 "data_offset": 2048,
00:11:56.463 "data_size": 63488
00:11:56.463 },
00:11:56.463 {
00:11:56.463 "name": "BaseBdev4",
00:11:56.463 "uuid": "328338b3-40b7-5bde-8bd2-af691c464f0f",
00:11:56.463 "is_configured": true,
00:11:56.463 "data_offset": 2048,
00:11:56.463 "data_size": 63488
00:11:56.463 }
00:11:56.463 ]
00:11:56.463 }'
00:11:56.463 11:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:56.463 11:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:57.033 11:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:11:57.033 11:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
[2024-11-27 11:50:23.335012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40
00:11:57.971 11:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:11:57.971 11:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:57.971 11:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:57.971 11:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:57.971 11:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:11:57.971 11:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:11:57.971 11:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]]
00:11:57.971 11:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4
00:11:57.971 11:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:11:57.971 11:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:57.971 11:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:57.971 11:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:57.971 11:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:57.971 11:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:57.971 11:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:57.971 11:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:57.971 11:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:57.971 11:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:57.971 11:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:57.971 11:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:57.971 11:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:57.971 11:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:57.971 11:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:57.971 11:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:57.971 "name": "raid_bdev1",
00:11:57.971 "uuid": "df05ef63-4266-46dc-a87c-97489c482db5",
00:11:57.971 "strip_size_kb": 0,
00:11:57.971 "state": "online",
00:11:57.971 "raid_level": "raid1",
00:11:57.971 "superblock": true,
00:11:57.971 "num_base_bdevs": 4,
00:11:57.971 "num_base_bdevs_discovered": 4,
00:11:57.971 "num_base_bdevs_operational": 4,
00:11:57.971 "base_bdevs_list": [
00:11:57.971 {
00:11:57.971 "name": "BaseBdev1",
00:11:57.971 "uuid": "05463364-2760-5a62-85fb-08eaa5fb125c",
00:11:57.971 "is_configured": true,
00:11:57.971 "data_offset": 2048,
00:11:57.971 "data_size": 63488
00:11:57.971 },
00:11:57.971 {
00:11:57.971 "name": "BaseBdev2",
00:11:57.971 "uuid": "a39ede10-8a25-5389-9147-a27cf84b8719",
00:11:57.971 "is_configured": true,
00:11:57.971 "data_offset": 2048,
00:11:57.971 "data_size": 63488
00:11:57.971 },
00:11:57.971 {
00:11:57.971 "name": "BaseBdev3",
00:11:57.971 "uuid": "111bdb42-944a-511e-92e6-585156f95e56",
00:11:57.971 "is_configured": true,
00:11:57.971 "data_offset": 2048,
00:11:57.971 "data_size": 63488
00:11:57.971 },
00:11:57.971 {
00:11:57.971 "name": "BaseBdev4",
00:11:57.971 "uuid": "328338b3-40b7-5bde-8bd2-af691c464f0f",
00:11:57.971 "is_configured": true,
00:11:57.971 "data_offset": 2048,
00:11:57.971 "data_size": 63488
00:11:57.971 }
00:11:57.971 ]
00:11:57.971 }'
00:11:57.971 11:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:57.971 11:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:58.541 11:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:58.541 11:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.541 11:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:58.541 [2024-11-27 11:50:24.773607] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:58.541 [2024-11-27 11:50:24.773715] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:58.541 [2024-11-27 11:50:24.777080] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:58.541 [2024-11-27 11:50:24.777196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:58.541 [2024-11-27 11:50:24.777350] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:58.541 [2024-11-27 11:50:24.777405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline
00:11:58.541 {
00:11:58.541 "results": [
00:11:58.541 {
00:11:58.541 "job": "raid_bdev1",
00:11:58.541 "core_mask": "0x1",
00:11:58.541 "workload": "randrw",
00:11:58.541 "percentage": 50,
00:11:58.541 "status": "finished",
00:11:58.541 "queue_depth": 1,
00:11:58.541 "io_size": 131072,
00:11:58.541 "runtime": 1.4395,
00:11:58.541 "iops": 10197.98541160125,
00:11:58.541 "mibps": 1274.7481764501563,
00:11:58.541 "io_failed": 0,
00:11:58.541 "io_timeout": 0,
00:11:58.541 "avg_latency_us": 95.13649917304237,
00:11:58.541 "min_latency_us": 24.817467248908297,
00:11:58.541 "max_latency_us": 1602.6270742358079
00:11:58.541 }
00:11:58.541 ],
00:11:58.541 "core_count": 1
00:11:58.541 }
00:11:58.541 11:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.541 11:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75016
00:11:58.541 11:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75016 ']'
00:11:58.541 11:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75016
00:11:58.541 11:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname
00:11:58.541 11:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:58.541 11:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75016
00:11:58.541 11:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
killing process with pid 75016
11:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
11:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75016'
11:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75016
[2024-11-27 11:50:24.821940] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
11:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75016
00:11:58.800 [2024-11-27 11:50:25.147918] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:12:00.179 11:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CQcH27l14w
00:12:00.179 11:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:12:00.179 11:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:12:00.179 11:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:12:00.179 11:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:12:00.179 11:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:12:00.179 11:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:12:00.179 11:50:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:12:00.179
00:12:00.179 real 0m4.840s
00:12:00.179 user 0m5.782s
00:12:00.179 sys 0m0.596s
00:12:00.179 11:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:00.179 ************************************
00:12:00.179 11:50:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.179 END TEST raid_read_error_test
00:12:00.179 ************************************
00:12:00.179 11:50:26 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write
00:12:00.179 11:50:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:12:00.179 11:50:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:00.179 11:50:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:12:00.179 ************************************
00:12:00.179 START TEST raid_write_error_test
00:12:00.179 ************************************
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']'
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CS5ujftlvk
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75162
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75162
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75162 ']'
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:00.179 11:50:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.179 [2024-11-27 11:50:26.539919] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization...
00:12:00.179 [2024-11-27 11:50:26.540135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75162 ]
[2024-11-27 11:50:26.702288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-27 11:50:26.815691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-27 11:50:27.018593] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
[2024-11-27 11:50:27.018685] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:01.262 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:01.262 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0
00:12:01.262 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:12:01.262 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:12:01.262 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.262 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.262 BaseBdev1_malloc
00:12:01.262 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.262 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:12:01.262 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.262 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.262 true
00:12:01.262 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.262 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:12:01.262 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.262 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.262 [2024-11-27 11:50:27.433984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:12:01.262 [2024-11-27 11:50:27.434091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:01.262 [2024-11-27 11:50:27.434129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:12:01.262 [2024-11-27 11:50:27.434159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:01.262 [2024-11-27 11:50:27.436311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:01.262 [2024-11-27 11:50:27.436385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:12:01.262 BaseBdev1
00:12:01.262 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.262 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:12:01.262 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:12:01.262 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.262 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.262 BaseBdev2_malloc
00:12:01.262 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.262 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:12:01.262 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.262 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.262 true
00:12:01.262 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.262 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.263 [2024-11-27 11:50:27.497624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:12:01.263 [2024-11-27 11:50:27.497730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:01.263 [2024-11-27 11:50:27.497770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:12:01.263 [2024-11-27 11:50:27.497803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:01.263 [2024-11-27 11:50:27.500159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:01.263 [2024-11-27 11:50:27.500240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:12:01.263 BaseBdev2
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.263 BaseBdev3_malloc
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.263 true
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.263 [2024-11-27 11:50:27.571928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:12:01.263 [2024-11-27 11:50:27.572028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:01.263 [2024-11-27 11:50:27.572068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:12:01.263 [2024-11-27 11:50:27.572108] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:01.263 [2024-11-27 11:50:27.574298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:01.263 [2024-11-27 11:50:27.574388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:12:01.263 BaseBdev3
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.263 BaseBdev4_malloc
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.263 true
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.263 [2024-11-27 11:50:27.633434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:12:01.263 [2024-11-27 11:50:27.633551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:01.263 [2024-11-27 11:50:27.633592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:12:01.263 [2024-11-27 11:50:27.633628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:01.263 [2024-11-27 11:50:27.635949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:01.263 [2024-11-27 11:50:27.636045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:12:01.263 BaseBdev4
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.263 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.263 [2024-11-27 11:50:27.641485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:01.263 [2024-11-27 11:50:27.643584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:01.263 [2024-11-27 11:50:27.643680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:01.263 [2024-11-27 11:50:27.643749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:12:01.263 [2024-11-27 11:50:27.644010] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580
00:12:01.521 [2024-11-27 11:50:27.644087] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:01.521 [2024-11-27 11:50:27.644375] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0
00:12:01.521 [2024-11-27 11:50:27.644582] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580
00:12:01.521 [2024-11-27 11:50:27.644593] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580
00:12:01.521 [2024-11-27 11:50:27.644788] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:01.521 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.521 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:12:01.521 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:01.521 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:01.521 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:01.521 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:01.521 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:01.521 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:01.521 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:01.521 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:01.521 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:01.521 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:01.521 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:01.521 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.521 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.521 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.521 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:01.521 "name": "raid_bdev1",
00:12:01.521 "uuid": "c4fdef50-d360-468f-af2b-b0f83bee7ee7",
00:12:01.521 "strip_size_kb": 0,
00:12:01.521 "state": "online",
00:12:01.521 "raid_level": "raid1",
00:12:01.521 "superblock": true,
00:12:01.521 "num_base_bdevs": 4,
00:12:01.521 "num_base_bdevs_discovered": 4,
"num_base_bdevs_operational": 4, 00:12:01.521 "base_bdevs_list": [ 00:12:01.521 { 00:12:01.521 "name": "BaseBdev1", 00:12:01.521 "uuid": "ef45f8b6-6f2b-56ea-b157-64f25c2bea63", 00:12:01.521 "is_configured": true, 00:12:01.521 "data_offset": 2048, 00:12:01.521 "data_size": 63488 00:12:01.521 }, 00:12:01.521 { 00:12:01.521 "name": "BaseBdev2", 00:12:01.521 "uuid": "faa05c0c-30db-5f95-a227-c86951a4396f", 00:12:01.521 "is_configured": true, 00:12:01.521 "data_offset": 2048, 00:12:01.521 "data_size": 63488 00:12:01.521 }, 00:12:01.521 { 00:12:01.521 "name": "BaseBdev3", 00:12:01.521 "uuid": "27f8d673-d314-5ed7-ae66-9c44cd58f2a6", 00:12:01.521 "is_configured": true, 00:12:01.521 "data_offset": 2048, 00:12:01.521 "data_size": 63488 00:12:01.521 }, 00:12:01.521 { 00:12:01.521 "name": "BaseBdev4", 00:12:01.521 "uuid": "5c2e2dea-fe04-536c-a12a-5468b46ddd9b", 00:12:01.521 "is_configured": true, 00:12:01.521 "data_offset": 2048, 00:12:01.521 "data_size": 63488 00:12:01.521 } 00:12:01.521 ] 00:12:01.521 }' 00:12:01.521 11:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.521 11:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.780 11:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:01.780 11:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:01.780 [2024-11-27 11:50:28.134009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:02.718 11:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:02.718 11:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.718 11:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.718 [2024-11-27 11:50:29.041661] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:02.718 [2024-11-27 11:50:29.041724] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:02.718 [2024-11-27 11:50:29.041957] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:12:02.718 11:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.718 11:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:02.718 11:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:02.718 11:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:02.718 11:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:02.718 11:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:02.718 11:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.718 11:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.718 11:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.718 11:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.718 11:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:02.718 11:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.718 11:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.718 11:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.718 11:50:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.718 11:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.718 11:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.718 11:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.718 11:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.718 11:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.718 11:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.718 "name": "raid_bdev1", 00:12:02.718 "uuid": "c4fdef50-d360-468f-af2b-b0f83bee7ee7", 00:12:02.718 "strip_size_kb": 0, 00:12:02.718 "state": "online", 00:12:02.718 "raid_level": "raid1", 00:12:02.718 "superblock": true, 00:12:02.718 "num_base_bdevs": 4, 00:12:02.718 "num_base_bdevs_discovered": 3, 00:12:02.718 "num_base_bdevs_operational": 3, 00:12:02.718 "base_bdevs_list": [ 00:12:02.718 { 00:12:02.718 "name": null, 00:12:02.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.718 "is_configured": false, 00:12:02.718 "data_offset": 0, 00:12:02.718 "data_size": 63488 00:12:02.718 }, 00:12:02.718 { 00:12:02.718 "name": "BaseBdev2", 00:12:02.718 "uuid": "faa05c0c-30db-5f95-a227-c86951a4396f", 00:12:02.718 "is_configured": true, 00:12:02.718 "data_offset": 2048, 00:12:02.718 "data_size": 63488 00:12:02.718 }, 00:12:02.718 { 00:12:02.718 "name": "BaseBdev3", 00:12:02.718 "uuid": "27f8d673-d314-5ed7-ae66-9c44cd58f2a6", 00:12:02.718 "is_configured": true, 00:12:02.718 "data_offset": 2048, 00:12:02.718 "data_size": 63488 00:12:02.718 }, 00:12:02.718 { 00:12:02.718 "name": "BaseBdev4", 00:12:02.718 "uuid": "5c2e2dea-fe04-536c-a12a-5468b46ddd9b", 00:12:02.718 "is_configured": true, 00:12:02.718 "data_offset": 2048, 00:12:02.718 "data_size": 63488 00:12:02.718 } 00:12:02.718 ] 
00:12:02.718 }' 00:12:02.718 11:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.718 11:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.293 11:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:03.293 11:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.293 11:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.293 [2024-11-27 11:50:29.501435] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:03.293 [2024-11-27 11:50:29.501467] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:03.293 [2024-11-27 11:50:29.504399] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:03.293 [2024-11-27 11:50:29.504502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.293 [2024-11-27 11:50:29.504634] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:03.293 [2024-11-27 11:50:29.504649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:03.293 { 00:12:03.293 "results": [ 00:12:03.293 { 00:12:03.293 "job": "raid_bdev1", 00:12:03.293 "core_mask": "0x1", 00:12:03.293 "workload": "randrw", 00:12:03.293 "percentage": 50, 00:12:03.293 "status": "finished", 00:12:03.293 "queue_depth": 1, 00:12:03.293 "io_size": 131072, 00:12:03.293 "runtime": 1.368235, 00:12:03.293 "iops": 10572.745179007992, 00:12:03.293 "mibps": 1321.593147375999, 00:12:03.293 "io_failed": 0, 00:12:03.293 "io_timeout": 0, 00:12:03.293 "avg_latency_us": 91.58068991165551, 00:12:03.293 "min_latency_us": 24.482096069868994, 00:12:03.293 "max_latency_us": 1752.8733624454148 00:12:03.293 } 00:12:03.293 ], 00:12:03.293 "core_count": 1 
00:12:03.293 } 00:12:03.293 11:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.293 11:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75162 00:12:03.293 11:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75162 ']' 00:12:03.293 11:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75162 00:12:03.293 11:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:03.293 11:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:03.293 11:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75162 00:12:03.293 11:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:03.293 11:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:03.293 11:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75162' 00:12:03.293 killing process with pid 75162 00:12:03.293 11:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75162 00:12:03.293 11:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75162 00:12:03.293 [2024-11-27 11:50:29.546744] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:03.551 [2024-11-27 11:50:29.899175] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:04.931 11:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CS5ujftlvk 00:12:04.931 11:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:04.931 11:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:04.931 11:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:04.931 11:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:04.931 ************************************ 00:12:04.931 END TEST raid_write_error_test 00:12:04.931 ************************************ 00:12:04.931 11:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:04.931 11:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:04.931 11:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:04.931 00:12:04.931 real 0m4.708s 00:12:04.931 user 0m5.524s 00:12:04.931 sys 0m0.546s 00:12:04.931 11:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.931 11:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.931 11:50:31 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:04.931 11:50:31 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:04.931 11:50:31 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:04.931 11:50:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:04.931 11:50:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.931 11:50:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:04.931 ************************************ 00:12:04.931 START TEST raid_rebuild_test 00:12:04.931 ************************************ 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:04.931 
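The state dump and bdevperf result block embedded in the trace above can be sanity-checked offline. A minimal sketch — the JSON is abbreviated to the fields the test's `verify_raid_bdev_state` helper actually reads, and all values are copied verbatim from this log:

```python
import json

# Abbreviated `bdev_raid_get_bdevs` output after EE_BaseBdev1_malloc was
# told to fail writes (fields copied from the log above).
info = json.loads("""
{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": null,        "is_configured": false},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}
""")

configured = [b for b in info["base_bdevs_list"] if b["is_configured"]]
assert info["state"] == "online"                   # raid1 survives one failure
assert len(configured) == info["num_base_bdevs_discovered"] == 3
assert info["base_bdevs_list"][0]["name"] is None  # failed slot cleared, not removed

# The bdevperf result block is internally consistent too: mibps is iops
# scaled by the 128 KiB io_size.
iops, io_size = 10572.745179007992, 131072
mibps = iops * io_size / (1024 * 1024)
assert abs(mibps - 1321.593147375999) < 1e-6
print(f"{len(configured)}/{info['num_base_bdevs']} base bdevs up, {mibps:.1f} MiB/s")
```

Note the failed slot keeps its position in `base_bdevs_list` with a null name and zeroed uuid, which is why the test compares `num_base_bdevs_discovered` against the configured count rather than the list length.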
11:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75300 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75300 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75300 ']' 00:12:04.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:04.931 11:50:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.191 [2024-11-27 11:50:31.315542] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:12:05.191 [2024-11-27 11:50:31.315771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75300 ] 00:12:05.191 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:05.191 Zero copy mechanism will not be used. 
00:12:05.191 [2024-11-27 11:50:31.493019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.450 [2024-11-27 11:50:31.608736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.450 [2024-11-27 11:50:31.821051] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.450 [2024-11-27 11:50:31.821200] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.018 BaseBdev1_malloc 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.018 [2024-11-27 11:50:32.212727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:06.018 [2024-11-27 11:50:32.212796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.018 [2024-11-27 11:50:32.212819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:06.018 [2024-11-27 11:50:32.212831] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.018 [2024-11-27 11:50:32.215008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.018 [2024-11-27 11:50:32.215053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:06.018 BaseBdev1 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.018 BaseBdev2_malloc 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.018 [2024-11-27 11:50:32.267830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:06.018 [2024-11-27 11:50:32.267918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.018 [2024-11-27 11:50:32.267948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:06.018 [2024-11-27 11:50:32.267961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.018 [2024-11-27 11:50:32.270193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.018 [2024-11-27 11:50:32.270294] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:06.018 BaseBdev2 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.018 spare_malloc 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.018 spare_delay 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.018 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.018 [2024-11-27 11:50:32.354677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:06.018 [2024-11-27 11:50:32.354748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.018 [2024-11-27 11:50:32.354771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:06.018 [2024-11-27 11:50:32.354784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.019 [2024-11-27 
11:50:32.357286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.019 [2024-11-27 11:50:32.357331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:06.019 spare 00:12:06.019 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.019 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:06.019 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.019 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.019 [2024-11-27 11:50:32.366725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:06.019 [2024-11-27 11:50:32.368803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:06.019 [2024-11-27 11:50:32.368918] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:06.019 [2024-11-27 11:50:32.368934] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:06.019 [2024-11-27 11:50:32.369228] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:06.019 [2024-11-27 11:50:32.369418] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:06.019 [2024-11-27 11:50:32.369431] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:06.019 [2024-11-27 11:50:32.369601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:06.019 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.019 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:06.019 11:50:32 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:06.019 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:06.019 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.019 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.019 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:06.019 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.019 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.019 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.019 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.019 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.019 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.019 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.019 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.019 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.278 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.278 "name": "raid_bdev1", 00:12:06.278 "uuid": "ebc48ae6-3842-468e-85fa-febaf863f842", 00:12:06.278 "strip_size_kb": 0, 00:12:06.278 "state": "online", 00:12:06.278 "raid_level": "raid1", 00:12:06.278 "superblock": false, 00:12:06.278 "num_base_bdevs": 2, 00:12:06.278 "num_base_bdevs_discovered": 2, 00:12:06.278 "num_base_bdevs_operational": 2, 00:12:06.278 "base_bdevs_list": [ 00:12:06.278 { 00:12:06.278 "name": "BaseBdev1", 
00:12:06.278 "uuid": "d2c61500-fa45-588b-aa7c-ce7af8d68028", 00:12:06.278 "is_configured": true, 00:12:06.278 "data_offset": 0, 00:12:06.278 "data_size": 65536 00:12:06.278 }, 00:12:06.278 { 00:12:06.278 "name": "BaseBdev2", 00:12:06.278 "uuid": "edb47612-c4c7-5d74-a79f-e32c6f315d54", 00:12:06.278 "is_configured": true, 00:12:06.278 "data_offset": 0, 00:12:06.278 "data_size": 65536 00:12:06.278 } 00:12:06.278 ] 00:12:06.278 }' 00:12:06.278 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.278 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.538 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:06.538 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:06.538 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.538 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.538 [2024-11-27 11:50:32.818271] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:06.538 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.538 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:06.538 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:06.538 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.538 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.538 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.538 11:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.538 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:06.538 
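The rebuild test's bdev stack, assembled in the trace above, comes down to a short RPC sequence. A dry-run sketch: `rpc()` here only records each invocation; a live run would shell out to SPDK's `scripts/rpc.py -s /var/tmp/spdk.sock` with the same arguments (socket path as in the log). Command names and arguments are taken from the trace; the grouping comments are an interpretation, not part of the script:

```python
calls = []

def rpc(*args):
    # Dry run: record the rpc.py invocation instead of executing it.
    calls.append("rpc.py " + " ".join(args))

# Two data bdevs, each a malloc wrapped in a passthru (bdev_raid.sh@602/603):
for n in (1, 2):
    rpc("bdev_malloc_create", "32", "512", "-b", f"BaseBdev{n}_malloc")
    rpc("bdev_passthru_create", "-b", f"BaseBdev{n}_malloc", "-p", f"BaseBdev{n}")

# The spare sits behind a delay bdev so rebuild I/O can be slowed and observed:
rpc("bdev_malloc_create", "32", "512", "-b", "spare_malloc")
rpc("bdev_delay_create", "-b", "spare_malloc", "-d", "spare_delay",
    "-r", "0", "-t", "0", "-w", "100000", "-n", "100000")
rpc("bdev_passthru_create", "-b", "spare_delay", "-p", "spare")

# raid1 across the two passthrus; no -s flag, i.e. no superblock in this variant:
rpc("bdev_raid_create", "-r", "raid1", "-b", "BaseBdev1 BaseBdev2", "-n", "raid_bdev1")

print("\n".join(calls))
```

The passthru layer is what lets the test later swap or fail a base bdev without touching the malloc device underneath.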
11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:06.538 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:06.538 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:06.538 11:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:06.538 11:50:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:06.538 11:50:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:06.538 11:50:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:06.538 11:50:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:06.538 11:50:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:06.538 11:50:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:06.538 11:50:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:06.538 11:50:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:06.538 11:50:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:06.798 [2024-11-27 11:50:33.093572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:06.798 /dev/nbd0 00:12:06.798 11:50:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:06.798 11:50:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:06.798 11:50:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:06.798 11:50:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:06.798 11:50:33 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:06.798 11:50:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:06.798 11:50:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:06.798 11:50:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:06.798 11:50:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:06.798 11:50:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:06.798 11:50:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:06.798 1+0 records in 00:12:06.798 1+0 records out 00:12:06.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000635081 s, 6.4 MB/s 00:12:06.798 11:50:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.798 11:50:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:06.798 11:50:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.798 11:50:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:06.798 11:50:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:06.798 11:50:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:06.798 11:50:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:06.798 11:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:06.798 11:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:06.798 11:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:12:12.071 65536+0 records in 00:12:12.071 65536+0 records out 00:12:12.071 33554432 bytes (34 MB, 32 MiB) copied, 4.27863 s, 7.8 MB/s 00:12:12.071 11:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:12.071 11:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:12.072 [2024-11-27 11:50:37.641759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.072 [2024-11-27 11:50:37.674007] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.072 11:50:37 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.072 "name": "raid_bdev1", 00:12:12.072 "uuid": "ebc48ae6-3842-468e-85fa-febaf863f842", 00:12:12.072 "strip_size_kb": 0, 00:12:12.072 "state": "online", 00:12:12.072 "raid_level": "raid1", 00:12:12.072 "superblock": false, 00:12:12.072 "num_base_bdevs": 2, 00:12:12.072 "num_base_bdevs_discovered": 1, 00:12:12.072 "num_base_bdevs_operational": 1, 00:12:12.072 "base_bdevs_list": [ 00:12:12.072 { 00:12:12.072 "name": null, 00:12:12.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.072 "is_configured": false, 00:12:12.072 "data_offset": 0, 00:12:12.072 "data_size": 65536 00:12:12.072 }, 00:12:12.072 { 00:12:12.072 "name": "BaseBdev2", 00:12:12.072 "uuid": "edb47612-c4c7-5d74-a79f-e32c6f315d54", 00:12:12.072 "is_configured": true, 00:12:12.072 "data_offset": 0, 00:12:12.072 "data_size": 65536 00:12:12.072 } 00:12:12.072 ] 00:12:12.072 }' 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.072 11:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.072 11:50:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:12.072 11:50:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.072 11:50:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.072 [2024-11-27 11:50:38.165278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:12.072 [2024-11-27 11:50:38.182479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:12:12.072 11:50:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.072 11:50:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:12.072 [2024-11-27 11:50:38.184774] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:12:13.010 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:13.010 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.010 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:13.010 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:13.010 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.010 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.010 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.010 11:50:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.010 11:50:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.010 11:50:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.010 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.010 "name": "raid_bdev1", 00:12:13.010 "uuid": "ebc48ae6-3842-468e-85fa-febaf863f842", 00:12:13.010 "strip_size_kb": 0, 00:12:13.010 "state": "online", 00:12:13.010 "raid_level": "raid1", 00:12:13.010 "superblock": false, 00:12:13.010 "num_base_bdevs": 2, 00:12:13.010 "num_base_bdevs_discovered": 2, 00:12:13.010 "num_base_bdevs_operational": 2, 00:12:13.010 "process": { 00:12:13.010 "type": "rebuild", 00:12:13.010 "target": "spare", 00:12:13.010 "progress": { 00:12:13.010 "blocks": 20480, 00:12:13.010 "percent": 31 00:12:13.010 } 00:12:13.010 }, 00:12:13.010 "base_bdevs_list": [ 00:12:13.010 { 00:12:13.010 "name": "spare", 00:12:13.010 "uuid": "d7f8344e-c25f-5052-847d-26eb78ebf3c3", 00:12:13.010 "is_configured": true, 00:12:13.010 "data_offset": 0, 00:12:13.010 
"data_size": 65536 00:12:13.010 }, 00:12:13.010 { 00:12:13.010 "name": "BaseBdev2", 00:12:13.010 "uuid": "edb47612-c4c7-5d74-a79f-e32c6f315d54", 00:12:13.010 "is_configured": true, 00:12:13.010 "data_offset": 0, 00:12:13.010 "data_size": 65536 00:12:13.010 } 00:12:13.010 ] 00:12:13.010 }' 00:12:13.010 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.010 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:13.010 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.010 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:13.010 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:13.010 11:50:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.010 11:50:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.010 [2024-11-27 11:50:39.347809] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:13.010 [2024-11-27 11:50:39.390934] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:13.010 [2024-11-27 11:50:39.391092] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.010 [2024-11-27 11:50:39.391161] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:13.010 [2024-11-27 11:50:39.391196] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:13.270 11:50:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.270 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:13.270 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:12:13.270 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.270 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.270 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.270 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:13.270 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.270 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.270 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.270 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.270 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.270 11:50:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.270 11:50:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.270 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.270 11:50:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.270 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.270 "name": "raid_bdev1", 00:12:13.270 "uuid": "ebc48ae6-3842-468e-85fa-febaf863f842", 00:12:13.270 "strip_size_kb": 0, 00:12:13.270 "state": "online", 00:12:13.270 "raid_level": "raid1", 00:12:13.270 "superblock": false, 00:12:13.270 "num_base_bdevs": 2, 00:12:13.270 "num_base_bdevs_discovered": 1, 00:12:13.270 "num_base_bdevs_operational": 1, 00:12:13.270 "base_bdevs_list": [ 00:12:13.270 { 00:12:13.270 "name": null, 00:12:13.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.270 
"is_configured": false, 00:12:13.270 "data_offset": 0, 00:12:13.270 "data_size": 65536 00:12:13.270 }, 00:12:13.270 { 00:12:13.270 "name": "BaseBdev2", 00:12:13.270 "uuid": "edb47612-c4c7-5d74-a79f-e32c6f315d54", 00:12:13.270 "is_configured": true, 00:12:13.270 "data_offset": 0, 00:12:13.270 "data_size": 65536 00:12:13.270 } 00:12:13.270 ] 00:12:13.270 }' 00:12:13.270 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.270 11:50:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.531 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:13.531 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.531 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:13.531 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:13.531 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.531 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.531 11:50:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.531 11:50:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.531 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.531 11:50:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.531 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.531 "name": "raid_bdev1", 00:12:13.531 "uuid": "ebc48ae6-3842-468e-85fa-febaf863f842", 00:12:13.531 "strip_size_kb": 0, 00:12:13.531 "state": "online", 00:12:13.531 "raid_level": "raid1", 00:12:13.531 "superblock": false, 00:12:13.531 "num_base_bdevs": 2, 00:12:13.531 
"num_base_bdevs_discovered": 1, 00:12:13.531 "num_base_bdevs_operational": 1, 00:12:13.531 "base_bdevs_list": [ 00:12:13.531 { 00:12:13.531 "name": null, 00:12:13.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.531 "is_configured": false, 00:12:13.531 "data_offset": 0, 00:12:13.531 "data_size": 65536 00:12:13.531 }, 00:12:13.531 { 00:12:13.531 "name": "BaseBdev2", 00:12:13.531 "uuid": "edb47612-c4c7-5d74-a79f-e32c6f315d54", 00:12:13.531 "is_configured": true, 00:12:13.531 "data_offset": 0, 00:12:13.531 "data_size": 65536 00:12:13.531 } 00:12:13.531 ] 00:12:13.531 }' 00:12:13.531 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.791 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:13.791 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.791 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:13.791 11:50:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:13.791 11:50:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.791 11:50:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.791 [2024-11-27 11:50:39.987812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:13.791 [2024-11-27 11:50:40.004444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:13.791 11:50:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.791 11:50:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:13.791 [2024-11-27 11:50:40.006292] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:14.729 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:14.729 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:14.729 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:14.729 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:14.729 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:14.729 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.729 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.729 11:50:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.729 11:50:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.729 11:50:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.729 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:14.729 "name": "raid_bdev1", 00:12:14.729 "uuid": "ebc48ae6-3842-468e-85fa-febaf863f842", 00:12:14.729 "strip_size_kb": 0, 00:12:14.730 "state": "online", 00:12:14.730 "raid_level": "raid1", 00:12:14.730 "superblock": false, 00:12:14.730 "num_base_bdevs": 2, 00:12:14.730 "num_base_bdevs_discovered": 2, 00:12:14.730 "num_base_bdevs_operational": 2, 00:12:14.730 "process": { 00:12:14.730 "type": "rebuild", 00:12:14.730 "target": "spare", 00:12:14.730 "progress": { 00:12:14.730 "blocks": 20480, 00:12:14.730 "percent": 31 00:12:14.730 } 00:12:14.730 }, 00:12:14.730 "base_bdevs_list": [ 00:12:14.730 { 00:12:14.730 "name": "spare", 00:12:14.730 "uuid": "d7f8344e-c25f-5052-847d-26eb78ebf3c3", 00:12:14.730 "is_configured": true, 00:12:14.730 "data_offset": 0, 00:12:14.730 "data_size": 65536 00:12:14.730 }, 00:12:14.730 { 00:12:14.730 "name": "BaseBdev2", 00:12:14.730 "uuid": 
"edb47612-c4c7-5d74-a79f-e32c6f315d54", 00:12:14.730 "is_configured": true, 00:12:14.730 "data_offset": 0, 00:12:14.730 "data_size": 65536 00:12:14.730 } 00:12:14.730 ] 00:12:14.730 }' 00:12:14.730 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:14.730 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:14.730 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:14.988 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:14.988 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:14.988 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:14.988 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:14.988 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:14.988 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=376 00:12:14.988 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:14.988 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:14.988 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:14.988 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:14.988 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:14.988 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:14.988 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.988 11:50:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:14.988 11:50:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.988 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.988 11:50:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.988 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:14.988 "name": "raid_bdev1", 00:12:14.988 "uuid": "ebc48ae6-3842-468e-85fa-febaf863f842", 00:12:14.988 "strip_size_kb": 0, 00:12:14.988 "state": "online", 00:12:14.988 "raid_level": "raid1", 00:12:14.988 "superblock": false, 00:12:14.988 "num_base_bdevs": 2, 00:12:14.988 "num_base_bdevs_discovered": 2, 00:12:14.988 "num_base_bdevs_operational": 2, 00:12:14.988 "process": { 00:12:14.988 "type": "rebuild", 00:12:14.988 "target": "spare", 00:12:14.988 "progress": { 00:12:14.988 "blocks": 22528, 00:12:14.988 "percent": 34 00:12:14.988 } 00:12:14.988 }, 00:12:14.988 "base_bdevs_list": [ 00:12:14.988 { 00:12:14.988 "name": "spare", 00:12:14.988 "uuid": "d7f8344e-c25f-5052-847d-26eb78ebf3c3", 00:12:14.988 "is_configured": true, 00:12:14.988 "data_offset": 0, 00:12:14.988 "data_size": 65536 00:12:14.988 }, 00:12:14.988 { 00:12:14.988 "name": "BaseBdev2", 00:12:14.988 "uuid": "edb47612-c4c7-5d74-a79f-e32c6f315d54", 00:12:14.988 "is_configured": true, 00:12:14.988 "data_offset": 0, 00:12:14.988 "data_size": 65536 00:12:14.988 } 00:12:14.988 ] 00:12:14.989 }' 00:12:14.989 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:14.989 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:14.989 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:14.989 11:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:14.989 11:50:41 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:12:15.923 11:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:15.923 11:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:15.923 11:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.923 11:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:15.923 11:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:15.923 11:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:15.923 11:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.923 11:50:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.923 11:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.923 11:50:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.923 11:50:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.923 11:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.923 "name": "raid_bdev1", 00:12:15.923 "uuid": "ebc48ae6-3842-468e-85fa-febaf863f842", 00:12:15.923 "strip_size_kb": 0, 00:12:15.923 "state": "online", 00:12:15.923 "raid_level": "raid1", 00:12:15.923 "superblock": false, 00:12:15.923 "num_base_bdevs": 2, 00:12:15.923 "num_base_bdevs_discovered": 2, 00:12:15.923 "num_base_bdevs_operational": 2, 00:12:15.924 "process": { 00:12:15.924 "type": "rebuild", 00:12:15.924 "target": "spare", 00:12:15.924 "progress": { 00:12:15.924 "blocks": 45056, 00:12:15.924 "percent": 68 00:12:15.924 } 00:12:15.924 }, 00:12:15.924 "base_bdevs_list": [ 00:12:15.924 { 00:12:15.924 "name": "spare", 00:12:15.924 "uuid": 
"d7f8344e-c25f-5052-847d-26eb78ebf3c3", 00:12:15.924 "is_configured": true, 00:12:15.924 "data_offset": 0, 00:12:15.924 "data_size": 65536 00:12:15.924 }, 00:12:15.924 { 00:12:15.924 "name": "BaseBdev2", 00:12:15.924 "uuid": "edb47612-c4c7-5d74-a79f-e32c6f315d54", 00:12:15.924 "is_configured": true, 00:12:15.924 "data_offset": 0, 00:12:15.924 "data_size": 65536 00:12:15.924 } 00:12:15.924 ] 00:12:15.924 }' 00:12:15.924 11:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:16.183 11:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:16.183 11:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:16.183 11:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:16.183 11:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:17.121 [2024-11-27 11:50:43.221622] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:17.121 [2024-11-27 11:50:43.221717] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:17.121 [2024-11-27 11:50:43.221769] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.121 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:17.121 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:17.121 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:17.121 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:17.121 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:17.121 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:17.121 11:50:43 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.121 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.121 11:50:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.121 11:50:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.121 11:50:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.121 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:17.121 "name": "raid_bdev1", 00:12:17.121 "uuid": "ebc48ae6-3842-468e-85fa-febaf863f842", 00:12:17.121 "strip_size_kb": 0, 00:12:17.121 "state": "online", 00:12:17.121 "raid_level": "raid1", 00:12:17.121 "superblock": false, 00:12:17.121 "num_base_bdevs": 2, 00:12:17.121 "num_base_bdevs_discovered": 2, 00:12:17.121 "num_base_bdevs_operational": 2, 00:12:17.121 "base_bdevs_list": [ 00:12:17.121 { 00:12:17.121 "name": "spare", 00:12:17.121 "uuid": "d7f8344e-c25f-5052-847d-26eb78ebf3c3", 00:12:17.121 "is_configured": true, 00:12:17.121 "data_offset": 0, 00:12:17.121 "data_size": 65536 00:12:17.121 }, 00:12:17.121 { 00:12:17.121 "name": "BaseBdev2", 00:12:17.121 "uuid": "edb47612-c4c7-5d74-a79f-e32c6f315d54", 00:12:17.121 "is_configured": true, 00:12:17.121 "data_offset": 0, 00:12:17.121 "data_size": 65536 00:12:17.121 } 00:12:17.121 ] 00:12:17.121 }' 00:12:17.121 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:17.121 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:17.121 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:17.381 "name": "raid_bdev1", 00:12:17.381 "uuid": "ebc48ae6-3842-468e-85fa-febaf863f842", 00:12:17.381 "strip_size_kb": 0, 00:12:17.381 "state": "online", 00:12:17.381 "raid_level": "raid1", 00:12:17.381 "superblock": false, 00:12:17.381 "num_base_bdevs": 2, 00:12:17.381 "num_base_bdevs_discovered": 2, 00:12:17.381 "num_base_bdevs_operational": 2, 00:12:17.381 "base_bdevs_list": [ 00:12:17.381 { 00:12:17.381 "name": "spare", 00:12:17.381 "uuid": "d7f8344e-c25f-5052-847d-26eb78ebf3c3", 00:12:17.381 "is_configured": true, 00:12:17.381 "data_offset": 0, 00:12:17.381 "data_size": 65536 00:12:17.381 }, 00:12:17.381 { 00:12:17.381 "name": "BaseBdev2", 00:12:17.381 "uuid": "edb47612-c4c7-5d74-a79f-e32c6f315d54", 00:12:17.381 "is_configured": true, 00:12:17.381 "data_offset": 0, 00:12:17.381 "data_size": 65536 
00:12:17.381 } 00:12:17.381 ] 00:12:17.381 }' 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.381 
11:50:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.381 "name": "raid_bdev1", 00:12:17.381 "uuid": "ebc48ae6-3842-468e-85fa-febaf863f842", 00:12:17.381 "strip_size_kb": 0, 00:12:17.381 "state": "online", 00:12:17.381 "raid_level": "raid1", 00:12:17.381 "superblock": false, 00:12:17.381 "num_base_bdevs": 2, 00:12:17.381 "num_base_bdevs_discovered": 2, 00:12:17.381 "num_base_bdevs_operational": 2, 00:12:17.381 "base_bdevs_list": [ 00:12:17.381 { 00:12:17.381 "name": "spare", 00:12:17.381 "uuid": "d7f8344e-c25f-5052-847d-26eb78ebf3c3", 00:12:17.381 "is_configured": true, 00:12:17.381 "data_offset": 0, 00:12:17.381 "data_size": 65536 00:12:17.381 }, 00:12:17.381 { 00:12:17.381 "name": "BaseBdev2", 00:12:17.381 "uuid": "edb47612-c4c7-5d74-a79f-e32c6f315d54", 00:12:17.381 "is_configured": true, 00:12:17.381 "data_offset": 0, 00:12:17.381 "data_size": 65536 00:12:17.381 } 00:12:17.381 ] 00:12:17.381 }' 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.381 11:50:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.952 11:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:17.952 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.952 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.952 [2024-11-27 11:50:44.107976] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:17.952 [2024-11-27 11:50:44.108015] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:17.952 [2024-11-27 11:50:44.108109] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:17.952 [2024-11-27 11:50:44.108183] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:17.952 [2024-11-27 11:50:44.108200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:17.952 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.952 11:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.952 11:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:17.952 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.952 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.952 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.952 11:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:17.952 11:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:17.952 11:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:17.952 11:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:17.952 11:50:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:17.952 11:50:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:17.952 11:50:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:17.952 11:50:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:17.952 11:50:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:17.952 11:50:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:17.952 11:50:44 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:17.952 11:50:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:17.952 11:50:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:18.212 /dev/nbd0 00:12:18.212 11:50:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:18.212 11:50:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:18.212 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:18.212 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:18.212 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:18.212 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:18.212 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:18.212 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:18.212 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:18.212 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:18.212 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:18.212 1+0 records in 00:12:18.212 1+0 records out 00:12:18.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405976 s, 10.1 MB/s 00:12:18.212 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.212 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:18.212 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.212 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:18.212 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:18.212 11:50:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:18.212 11:50:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:18.212 11:50:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:18.472 /dev/nbd1 00:12:18.472 11:50:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:18.472 11:50:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:18.472 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:18.472 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:18.472 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:18.472 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:18.472 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:18.472 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:18.472 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:18.472 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:18.472 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:18.472 1+0 records in 00:12:18.472 1+0 records out 00:12:18.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510135 s, 8.0 MB/s 00:12:18.472 11:50:44 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.472 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:18.472 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.472 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:18.472 11:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:18.472 11:50:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:18.472 11:50:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:18.472 11:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:18.732 11:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:18.732 11:50:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:18.732 11:50:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:18.732 11:50:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:18.732 11:50:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:18.732 11:50:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.732 11:50:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:18.732 11:50:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:18.732 11:50:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:18.732 11:50:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:18.732 
11:50:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.732 11:50:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.732 11:50:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:18.732 11:50:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:18.732 11:50:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.732 11:50:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.732 11:50:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:18.992 11:50:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:18.992 11:50:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:18.992 11:50:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:18.992 11:50:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.992 11:50:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.992 11:50:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:18.992 11:50:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:18.992 11:50:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.992 11:50:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:18.992 11:50:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75300 00:12:18.992 11:50:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75300 ']' 00:12:18.992 11:50:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75300 00:12:18.992 11:50:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 
-- # uname 00:12:18.992 11:50:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.992 11:50:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75300 00:12:18.992 11:50:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:18.992 11:50:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:18.992 killing process with pid 75300 00:12:18.992 11:50:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75300' 00:12:18.992 11:50:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75300 00:12:18.992 Received shutdown signal, test time was about 60.000000 seconds 00:12:18.992 00:12:18.992 Latency(us) 00:12:18.992 [2024-11-27T11:50:45.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:18.992 [2024-11-27T11:50:45.377Z] =================================================================================================================== 00:12:18.992 [2024-11-27T11:50:45.377Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:18.992 [2024-11-27 11:50:45.333886] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:18.992 11:50:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75300 00:12:19.561 [2024-11-27 11:50:45.639104] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:20.599 00:12:20.599 real 0m15.581s 00:12:20.599 user 0m17.716s 00:12:20.599 sys 0m2.979s 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.599 ************************************ 00:12:20.599 END TEST raid_rebuild_test 
00:12:20.599 ************************************ 00:12:20.599 11:50:46 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:20.599 11:50:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:20.599 11:50:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.599 11:50:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:20.599 ************************************ 00:12:20.599 START TEST raid_rebuild_test_sb 00:12:20.599 ************************************ 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75724 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75724 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75724 ']' 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.599 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.599 11:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.599 [2024-11-27 11:50:46.965582] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:12:20.599 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:20.599 Zero copy mechanism will not be used. 00:12:20.599 [2024-11-27 11:50:46.965715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75724 ] 00:12:20.858 [2024-11-27 11:50:47.144098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.117 [2024-11-27 11:50:47.260908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.117 [2024-11-27 11:50:47.464690] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.117 [2024-11-27 11:50:47.464779] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.684 BaseBdev1_malloc 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.684 [2024-11-27 11:50:47.866974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:21.684 [2024-11-27 11:50:47.867043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.684 [2024-11-27 11:50:47.867071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:21.684 [2024-11-27 11:50:47.867084] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.684 [2024-11-27 11:50:47.869424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.684 [2024-11-27 11:50:47.869467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:21.684 BaseBdev1 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.684 BaseBdev2_malloc 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.684 [2024-11-27 11:50:47.923358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:21.684 [2024-11-27 11:50:47.923439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.684 [2024-11-27 11:50:47.923464] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:21.684 [2024-11-27 11:50:47.923476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.684 [2024-11-27 11:50:47.925779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.684 [2024-11-27 11:50:47.925822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:21.684 BaseBdev2 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.684 spare_malloc 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.684 spare_delay 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:21.684 11:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.685 11:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.685 [2024-11-27 11:50:48.002573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:21.685 [2024-11-27 11:50:48.002651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.685 [2024-11-27 11:50:48.002677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:21.685 [2024-11-27 11:50:48.002690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.685 [2024-11-27 11:50:48.005266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.685 [2024-11-27 11:50:48.005313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:21.685 spare 00:12:21.685 11:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.685 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:21.685 11:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.685 11:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.685 [2024-11-27 11:50:48.014604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:21.685 [2024-11-27 11:50:48.016691] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:21.685 [2024-11-27 11:50:48.016923] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:21.685 [2024-11-27 11:50:48.016950] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:21.685 [2024-11-27 11:50:48.017251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:21.685 [2024-11-27 11:50:48.017435] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:21.685 [2024-11-27 11:50:48.017452] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:21.685 [2024-11-27 11:50:48.017676] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.685 11:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.685 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:21.685 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.685 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.685 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.685 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.685 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:21.685 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.685 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.685 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.685 11:50:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.685 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.685 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.685 11:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.685 11:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.685 11:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.944 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.944 "name": "raid_bdev1", 00:12:21.944 "uuid": "d15c1cda-a813-4a09-8462-5ecf113e3985", 00:12:21.944 "strip_size_kb": 0, 00:12:21.944 "state": "online", 00:12:21.944 "raid_level": "raid1", 00:12:21.944 "superblock": true, 00:12:21.944 "num_base_bdevs": 2, 00:12:21.944 "num_base_bdevs_discovered": 2, 00:12:21.944 "num_base_bdevs_operational": 2, 00:12:21.944 "base_bdevs_list": [ 00:12:21.944 { 00:12:21.944 "name": "BaseBdev1", 00:12:21.944 "uuid": "561ef35a-f977-5f98-9dbb-7aa2c6484228", 00:12:21.944 "is_configured": true, 00:12:21.944 "data_offset": 2048, 00:12:21.944 "data_size": 63488 00:12:21.944 }, 00:12:21.944 { 00:12:21.944 "name": "BaseBdev2", 00:12:21.944 "uuid": "09a49e8a-c055-5261-bbf9-bb2886c43dfb", 00:12:21.944 "is_configured": true, 00:12:21.944 "data_offset": 2048, 00:12:21.944 "data_size": 63488 00:12:21.944 } 00:12:21.944 ] 00:12:21.944 }' 00:12:21.944 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.944 11:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.204 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:22.204 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r 
'.[].num_blocks' 00:12:22.204 11:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.204 11:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.204 [2024-11-27 11:50:48.406263] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:22.204 11:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.204 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:22.204 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:22.204 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.204 11:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.204 11:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.204 11:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.204 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:22.204 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:22.204 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:22.204 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:22.204 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:22.204 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:22.204 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:22.204 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:22.204 11:50:48 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:22.204 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:22.204 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:22.204 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:22.204 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:22.204 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:22.465 [2024-11-27 11:50:48.697493] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:22.465 /dev/nbd0 00:12:22.465 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:22.465 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:22.465 11:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:22.465 11:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:22.465 11:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:22.465 11:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:22.465 11:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:22.465 11:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:22.465 11:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:22.465 11:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:22.465 11:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:22.465 1+0 records in 00:12:22.465 1+0 records out 00:12:22.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423089 s, 9.7 MB/s 00:12:22.465 11:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.465 11:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:22.465 11:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.465 11:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:22.465 11:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:22.465 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:22.465 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:22.465 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:22.465 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:22.465 11:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:26.669 63488+0 records in 00:12:26.669 63488+0 records out 00:12:26.669 32505856 bytes (33 MB, 31 MiB) copied, 4.16328 s, 7.8 MB/s 00:12:26.669 11:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:26.669 11:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:26.669 11:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:26.669 11:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:26.669 11:50:52 bdev_raid.raid_rebuild_test_sb -- 
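The dd transfer recorded above (63488+0 records of 512 bytes, 32505856 bytes total) is simply the raid bdev size in blocks times the block size. A minimal arithmetic check of that relationship, independent of SPDK:

```shell
# Recompute the byte count dd reported above:
# raid_bdev_size (63488 blocks, set at bdev_raid.sh@616) x 512-byte blocks.
raid_bdev_size=63488
block_size=512
echo $((raid_bdev_size * block_size))  # 32505856 bytes (~31 MiB), matching the log
```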
bdev/nbd_common.sh@51 -- # local i 00:12:26.669 11:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:26.669 11:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:26.929 [2024-11-27 11:50:53.138511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.929 [2024-11-27 11:50:53.174539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.929 "name": "raid_bdev1", 00:12:26.929 "uuid": "d15c1cda-a813-4a09-8462-5ecf113e3985", 00:12:26.929 "strip_size_kb": 0, 00:12:26.929 "state": "online", 00:12:26.929 "raid_level": "raid1", 00:12:26.929 "superblock": true, 00:12:26.929 "num_base_bdevs": 2, 00:12:26.929 "num_base_bdevs_discovered": 1, 00:12:26.929 "num_base_bdevs_operational": 1, 00:12:26.929 "base_bdevs_list": [ 00:12:26.929 { 00:12:26.929 "name": null, 00:12:26.929 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:26.929 "is_configured": false, 00:12:26.929 "data_offset": 0, 00:12:26.929 "data_size": 63488 00:12:26.929 }, 00:12:26.929 { 00:12:26.929 "name": "BaseBdev2", 00:12:26.929 "uuid": "09a49e8a-c055-5261-bbf9-bb2886c43dfb", 00:12:26.929 "is_configured": true, 00:12:26.929 "data_offset": 2048, 00:12:26.929 "data_size": 63488 00:12:26.929 } 00:12:26.929 ] 00:12:26.929 }' 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.929 11:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.497 11:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:27.497 11:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.497 11:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.497 [2024-11-27 11:50:53.609858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:27.497 [2024-11-27 11:50:53.627750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:27.497 11:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.497 [2024-11-27 11:50:53.629700] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:27.497 11:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:28.438 11:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:28.438 11:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.438 11:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:28.438 11:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:28.438 
11:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.438 11:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.438 11:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.438 11:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.438 11:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.438 11:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.438 11:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.438 "name": "raid_bdev1", 00:12:28.438 "uuid": "d15c1cda-a813-4a09-8462-5ecf113e3985", 00:12:28.438 "strip_size_kb": 0, 00:12:28.438 "state": "online", 00:12:28.438 "raid_level": "raid1", 00:12:28.438 "superblock": true, 00:12:28.438 "num_base_bdevs": 2, 00:12:28.438 "num_base_bdevs_discovered": 2, 00:12:28.438 "num_base_bdevs_operational": 2, 00:12:28.438 "process": { 00:12:28.438 "type": "rebuild", 00:12:28.438 "target": "spare", 00:12:28.438 "progress": { 00:12:28.438 "blocks": 20480, 00:12:28.438 "percent": 32 00:12:28.438 } 00:12:28.438 }, 00:12:28.438 "base_bdevs_list": [ 00:12:28.438 { 00:12:28.438 "name": "spare", 00:12:28.438 "uuid": "155c06aa-6d98-5e43-8466-cae67c866870", 00:12:28.438 "is_configured": true, 00:12:28.438 "data_offset": 2048, 00:12:28.438 "data_size": 63488 00:12:28.438 }, 00:12:28.438 { 00:12:28.438 "name": "BaseBdev2", 00:12:28.438 "uuid": "09a49e8a-c055-5261-bbf9-bb2886c43dfb", 00:12:28.438 "is_configured": true, 00:12:28.438 "data_offset": 2048, 00:12:28.438 "data_size": 63488 00:12:28.438 } 00:12:28.438 ] 00:12:28.438 }' 00:12:28.438 11:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.438 11:50:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:28.438 11:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.438 11:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:28.438 11:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:28.438 11:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.438 11:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.438 [2024-11-27 11:50:54.745332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:28.697 [2024-11-27 11:50:54.835758] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:28.697 [2024-11-27 11:50:54.835864] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.697 [2024-11-27 11:50:54.835883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:28.697 [2024-11-27 11:50:54.835898] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:28.697 11:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.697 11:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:28.697 11:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.697 11:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.697 11:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.697 11:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.697 11:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:12:28.697 11:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.697 11:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.697 11:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.697 11:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.697 11:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.697 11:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.697 11:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.697 11:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.697 11:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.697 11:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.697 "name": "raid_bdev1", 00:12:28.697 "uuid": "d15c1cda-a813-4a09-8462-5ecf113e3985", 00:12:28.697 "strip_size_kb": 0, 00:12:28.697 "state": "online", 00:12:28.697 "raid_level": "raid1", 00:12:28.697 "superblock": true, 00:12:28.697 "num_base_bdevs": 2, 00:12:28.697 "num_base_bdevs_discovered": 1, 00:12:28.697 "num_base_bdevs_operational": 1, 00:12:28.697 "base_bdevs_list": [ 00:12:28.697 { 00:12:28.697 "name": null, 00:12:28.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.697 "is_configured": false, 00:12:28.697 "data_offset": 0, 00:12:28.697 "data_size": 63488 00:12:28.697 }, 00:12:28.697 { 00:12:28.697 "name": "BaseBdev2", 00:12:28.697 "uuid": "09a49e8a-c055-5261-bbf9-bb2886c43dfb", 00:12:28.697 "is_configured": true, 00:12:28.697 "data_offset": 2048, 00:12:28.697 "data_size": 63488 00:12:28.697 } 00:12:28.697 ] 00:12:28.697 }' 00:12:28.697 11:50:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.697 11:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.957 11:50:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:28.957 11:50:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.957 11:50:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:28.957 11:50:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:28.957 11:50:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.957 11:50:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.957 11:50:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.957 11:50:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.957 11:50:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.957 11:50:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.957 11:50:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.957 "name": "raid_bdev1", 00:12:28.957 "uuid": "d15c1cda-a813-4a09-8462-5ecf113e3985", 00:12:28.957 "strip_size_kb": 0, 00:12:28.957 "state": "online", 00:12:28.957 "raid_level": "raid1", 00:12:28.957 "superblock": true, 00:12:28.957 "num_base_bdevs": 2, 00:12:28.957 "num_base_bdevs_discovered": 1, 00:12:28.957 "num_base_bdevs_operational": 1, 00:12:28.957 "base_bdevs_list": [ 00:12:28.957 { 00:12:28.957 "name": null, 00:12:28.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.957 "is_configured": false, 00:12:28.957 "data_offset": 0, 00:12:28.957 "data_size": 63488 00:12:28.957 }, 00:12:28.957 
{ 00:12:28.957 "name": "BaseBdev2", 00:12:28.957 "uuid": "09a49e8a-c055-5261-bbf9-bb2886c43dfb", 00:12:28.957 "is_configured": true, 00:12:28.957 "data_offset": 2048, 00:12:28.957 "data_size": 63488 00:12:28.957 } 00:12:28.957 ] 00:12:28.957 }' 00:12:29.217 11:50:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.217 11:50:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:29.217 11:50:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.217 11:50:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:29.217 11:50:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:29.217 11:50:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.217 11:50:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.217 [2024-11-27 11:50:55.423552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:29.217 [2024-11-27 11:50:55.440579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:29.217 11:50:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.217 11:50:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:29.217 [2024-11-27 11:50:55.442425] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:30.154 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:30.154 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.154 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:30.154 11:50:56 
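The `jq -r '.process.type // "none"'` and `'.process.target // "none"'` checks running above can be exercised standalone against a trimmed copy of the JSON the RPC returned. This is a sketch using hypothetical inline JSON, not a live `rpc_cmd bdev_raid_get_bdevs` call:

```shell
# Trimmed-down raid bdev info, shaped like the JSON captured in the log.
raid_bdev_info='{"name":"raid_bdev1","state":"online","process":{"type":"rebuild","target":"spare"}}'
# The same jq "//" fallbacks the test script uses: "none" when no process is running.
echo "$raid_bdev_info" | jq -r '.process.type // "none"'    # rebuild
echo "$raid_bdev_info" | jq -r '.process.target // "none"'  # spare
# Once the rebuild finishes, .process is absent and both queries fall back:
echo '{"name":"raid_bdev1","state":"online"}' | jq -r '.process.type // "none"'  # none
```

The `//` alternative operator is what lets the script compare against the literal string `none` without first testing whether `.process` exists.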
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:30.154 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.154 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.154 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.154 11:50:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.154 11:50:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.154 11:50:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.154 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.154 "name": "raid_bdev1", 00:12:30.154 "uuid": "d15c1cda-a813-4a09-8462-5ecf113e3985", 00:12:30.154 "strip_size_kb": 0, 00:12:30.154 "state": "online", 00:12:30.154 "raid_level": "raid1", 00:12:30.154 "superblock": true, 00:12:30.154 "num_base_bdevs": 2, 00:12:30.154 "num_base_bdevs_discovered": 2, 00:12:30.154 "num_base_bdevs_operational": 2, 00:12:30.154 "process": { 00:12:30.154 "type": "rebuild", 00:12:30.154 "target": "spare", 00:12:30.154 "progress": { 00:12:30.154 "blocks": 20480, 00:12:30.154 "percent": 32 00:12:30.154 } 00:12:30.154 }, 00:12:30.154 "base_bdevs_list": [ 00:12:30.154 { 00:12:30.154 "name": "spare", 00:12:30.154 "uuid": "155c06aa-6d98-5e43-8466-cae67c866870", 00:12:30.154 "is_configured": true, 00:12:30.154 "data_offset": 2048, 00:12:30.154 "data_size": 63488 00:12:30.154 }, 00:12:30.154 { 00:12:30.154 "name": "BaseBdev2", 00:12:30.154 "uuid": "09a49e8a-c055-5261-bbf9-bb2886c43dfb", 00:12:30.154 "is_configured": true, 00:12:30.154 "data_offset": 2048, 00:12:30.154 "data_size": 63488 00:12:30.154 } 00:12:30.154 ] 00:12:30.154 }' 00:12:30.154 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:12:30.413 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:30.413 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.413 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:30.413 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:30.413 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:30.413 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:30.413 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:30.413 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:30.413 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:30.413 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=391 00:12:30.413 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:30.413 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:30.413 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.413 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:30.413 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:30.413 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.413 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.413 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:30.413 11:50:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.413 11:50:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.414 11:50:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.414 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.414 "name": "raid_bdev1", 00:12:30.414 "uuid": "d15c1cda-a813-4a09-8462-5ecf113e3985", 00:12:30.414 "strip_size_kb": 0, 00:12:30.414 "state": "online", 00:12:30.414 "raid_level": "raid1", 00:12:30.414 "superblock": true, 00:12:30.414 "num_base_bdevs": 2, 00:12:30.414 "num_base_bdevs_discovered": 2, 00:12:30.414 "num_base_bdevs_operational": 2, 00:12:30.414 "process": { 00:12:30.414 "type": "rebuild", 00:12:30.414 "target": "spare", 00:12:30.414 "progress": { 00:12:30.414 "blocks": 22528, 00:12:30.414 "percent": 35 00:12:30.414 } 00:12:30.414 }, 00:12:30.414 "base_bdevs_list": [ 00:12:30.414 { 00:12:30.414 "name": "spare", 00:12:30.414 "uuid": "155c06aa-6d98-5e43-8466-cae67c866870", 00:12:30.414 "is_configured": true, 00:12:30.414 "data_offset": 2048, 00:12:30.414 "data_size": 63488 00:12:30.414 }, 00:12:30.414 { 00:12:30.414 "name": "BaseBdev2", 00:12:30.414 "uuid": "09a49e8a-c055-5261-bbf9-bb2886c43dfb", 00:12:30.414 "is_configured": true, 00:12:30.414 "data_offset": 2048, 00:12:30.414 "data_size": 63488 00:12:30.414 } 00:12:30.414 ] 00:12:30.414 }' 00:12:30.414 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.414 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:30.414 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.414 11:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:30.414 11:50:56 
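The `line 666: [: =: unary operator expected` error recorded above happens because an unquoted variable that expands to nothing leaves the single-bracket test with only `'[' = false ']'`, so `[` sees `=` where an operand should be. A minimal reproduction of that shell behavior, unrelated to SPDK itself:

```shell
# An unset variable expands to nothing unquoted, so the test collapses to: [ = false ]
unset flag
[ $flag = false ] 2>/dev/null || echo "unary operator error, as in the log"
# Quoting the expansion keeps an (empty) operand present, so the test is well-formed:
[ "$flag" = false ] || echo "flag is not the string false"
```

With quoting, the comparison simply evaluates to false instead of raising a syntax error; the test above then proceeds past the check, as the log shows it does.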
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:31.351 11:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:31.351 11:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:31.351 11:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.351 11:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:31.351 11:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:31.351 11:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.351 11:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.351 11:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.351 11:50:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.351 11:50:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.351 11:50:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.610 11:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.610 "name": "raid_bdev1", 00:12:31.610 "uuid": "d15c1cda-a813-4a09-8462-5ecf113e3985", 00:12:31.610 "strip_size_kb": 0, 00:12:31.610 "state": "online", 00:12:31.610 "raid_level": "raid1", 00:12:31.610 "superblock": true, 00:12:31.610 "num_base_bdevs": 2, 00:12:31.610 "num_base_bdevs_discovered": 2, 00:12:31.610 "num_base_bdevs_operational": 2, 00:12:31.610 "process": { 00:12:31.610 "type": "rebuild", 00:12:31.610 "target": "spare", 00:12:31.610 "progress": { 00:12:31.610 "blocks": 45056, 00:12:31.610 "percent": 70 00:12:31.610 } 00:12:31.610 }, 00:12:31.610 "base_bdevs_list": [ 00:12:31.610 { 
00:12:31.610 "name": "spare", 00:12:31.610 "uuid": "155c06aa-6d98-5e43-8466-cae67c866870", 00:12:31.610 "is_configured": true, 00:12:31.610 "data_offset": 2048, 00:12:31.610 "data_size": 63488 00:12:31.610 }, 00:12:31.610 { 00:12:31.610 "name": "BaseBdev2", 00:12:31.610 "uuid": "09a49e8a-c055-5261-bbf9-bb2886c43dfb", 00:12:31.610 "is_configured": true, 00:12:31.610 "data_offset": 2048, 00:12:31.610 "data_size": 63488 00:12:31.610 } 00:12:31.610 ] 00:12:31.610 }' 00:12:31.610 11:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.610 11:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:31.610 11:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.610 11:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:31.610 11:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:32.179 [2024-11-27 11:50:58.557240] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:32.179 [2024-11-27 11:50:58.557334] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:32.179 [2024-11-27 11:50:58.557474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.748 11:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:32.748 11:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:32.748 11:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.748 11:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:32.748 11:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:32.748 11:50:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.748 11:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.748 11:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.748 11:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.748 11:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.748 11:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.748 11:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.748 "name": "raid_bdev1", 00:12:32.748 "uuid": "d15c1cda-a813-4a09-8462-5ecf113e3985", 00:12:32.748 "strip_size_kb": 0, 00:12:32.748 "state": "online", 00:12:32.748 "raid_level": "raid1", 00:12:32.748 "superblock": true, 00:12:32.748 "num_base_bdevs": 2, 00:12:32.748 "num_base_bdevs_discovered": 2, 00:12:32.748 "num_base_bdevs_operational": 2, 00:12:32.748 "base_bdevs_list": [ 00:12:32.748 { 00:12:32.748 "name": "spare", 00:12:32.748 "uuid": "155c06aa-6d98-5e43-8466-cae67c866870", 00:12:32.748 "is_configured": true, 00:12:32.748 "data_offset": 2048, 00:12:32.748 "data_size": 63488 00:12:32.748 }, 00:12:32.748 { 00:12:32.748 "name": "BaseBdev2", 00:12:32.748 "uuid": "09a49e8a-c055-5261-bbf9-bb2886c43dfb", 00:12:32.748 "is_configured": true, 00:12:32.748 "data_offset": 2048, 00:12:32.748 "data_size": 63488 00:12:32.748 } 00:12:32.748 ] 00:12:32.748 }' 00:12:32.748 11:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.748 11:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:32.748 11:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.748 11:50:58 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:32.748 11:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:32.748 11:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:32.748 11:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.748 11:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:32.748 11:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:32.748 11:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.748 11:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.748 11:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.748 11:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.748 11:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.748 11:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.748 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.748 "name": "raid_bdev1", 00:12:32.748 "uuid": "d15c1cda-a813-4a09-8462-5ecf113e3985", 00:12:32.749 "strip_size_kb": 0, 00:12:32.749 "state": "online", 00:12:32.749 "raid_level": "raid1", 00:12:32.749 "superblock": true, 00:12:32.749 "num_base_bdevs": 2, 00:12:32.749 "num_base_bdevs_discovered": 2, 00:12:32.749 "num_base_bdevs_operational": 2, 00:12:32.749 "base_bdevs_list": [ 00:12:32.749 { 00:12:32.749 "name": "spare", 00:12:32.749 "uuid": "155c06aa-6d98-5e43-8466-cae67c866870", 00:12:32.749 "is_configured": true, 00:12:32.749 "data_offset": 2048, 00:12:32.749 "data_size": 63488 00:12:32.749 }, 00:12:32.749 { 00:12:32.749 "name": 
"BaseBdev2", 00:12:32.749 "uuid": "09a49e8a-c055-5261-bbf9-bb2886c43dfb", 00:12:32.749 "is_configured": true, 00:12:32.749 "data_offset": 2048, 00:12:32.749 "data_size": 63488 00:12:32.749 } 00:12:32.749 ] 00:12:32.749 }' 00:12:32.749 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.749 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:32.749 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.007 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:33.007 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:33.007 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.007 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.007 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.007 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.007 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:33.007 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.007 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.007 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.007 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.007 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.007 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:33.007 11:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.007 11:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.007 11:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.007 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.007 "name": "raid_bdev1", 00:12:33.007 "uuid": "d15c1cda-a813-4a09-8462-5ecf113e3985", 00:12:33.007 "strip_size_kb": 0, 00:12:33.007 "state": "online", 00:12:33.007 "raid_level": "raid1", 00:12:33.007 "superblock": true, 00:12:33.007 "num_base_bdevs": 2, 00:12:33.007 "num_base_bdevs_discovered": 2, 00:12:33.007 "num_base_bdevs_operational": 2, 00:12:33.007 "base_bdevs_list": [ 00:12:33.007 { 00:12:33.007 "name": "spare", 00:12:33.007 "uuid": "155c06aa-6d98-5e43-8466-cae67c866870", 00:12:33.007 "is_configured": true, 00:12:33.007 "data_offset": 2048, 00:12:33.007 "data_size": 63488 00:12:33.007 }, 00:12:33.007 { 00:12:33.007 "name": "BaseBdev2", 00:12:33.007 "uuid": "09a49e8a-c055-5261-bbf9-bb2886c43dfb", 00:12:33.007 "is_configured": true, 00:12:33.007 "data_offset": 2048, 00:12:33.007 "data_size": 63488 00:12:33.007 } 00:12:33.007 ] 00:12:33.007 }' 00:12:33.007 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.007 11:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.265 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:33.265 11:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.265 11:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.265 [2024-11-27 11:50:59.615772] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:33.265 [2024-11-27 11:50:59.615819] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:33.265 [2024-11-27 11:50:59.615932] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:33.265 [2024-11-27 11:50:59.616014] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:33.265 [2024-11-27 11:50:59.616026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:33.265 11:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.265 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.265 11:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.265 11:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.265 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:33.265 11:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.525 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:33.525 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:33.525 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:33.525 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:33.525 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:33.525 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:33.525 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:33.525 11:50:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:33.525 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:33.525 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:33.525 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:33.525 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:33.525 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:33.525 /dev/nbd0 00:12:33.786 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:33.786 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:33.786 11:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:33.786 11:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:33.786 11:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:33.786 11:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:33.786 11:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:33.786 11:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:33.786 11:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:33.786 11:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:33.786 11:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.786 1+0 records in 00:12:33.786 1+0 records out 00:12:33.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000347928 s, 11.8 MB/s 00:12:33.786 11:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.786 11:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:33.786 11:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.786 11:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:33.786 11:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:33.786 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:33.786 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:33.786 11:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:33.786 /dev/nbd1 00:12:33.786 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:33.786 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:33.786 11:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:33.786 11:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:33.786 11:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:33.786 11:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:33.786 11:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:34.045 11:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:34.045 11:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:34.045 11:51:00 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:34.045 11:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:34.045 1+0 records in 00:12:34.045 1+0 records out 00:12:34.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300754 s, 13.6 MB/s 00:12:34.045 11:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.045 11:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:34.045 11:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.045 11:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:34.045 11:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:34.045 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:34.045 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:34.045 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:34.045 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:34.045 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:34.045 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:34.045 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:34.045 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:34.045 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.045 
11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:34.304 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:34.304 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:34.304 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:34.304 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.304 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.304 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:34.304 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:34.304 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.304 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.304 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@45 -- # return 0 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.565 [2024-11-27 11:51:00.815814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:34.565 [2024-11-27 11:51:00.815905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.565 [2024-11-27 11:51:00.815933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:34.565 [2024-11-27 11:51:00.815943] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.565 [2024-11-27 11:51:00.818308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.565 [2024-11-27 11:51:00.818352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:34.565 [2024-11-27 11:51:00.818470] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:34.565 [2024-11-27 11:51:00.818536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:34.565 [2024-11-27 11:51:00.818713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:12:34.565 spare 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.565 [2024-11-27 11:51:00.918640] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:34.565 [2024-11-27 11:51:00.918689] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:34.565 [2024-11-27 11:51:00.919091] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:34.565 [2024-11-27 11:51:00.919321] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:34.565 [2024-11-27 11:51:00.919345] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:34.565 [2024-11-27 11:51:00.919595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.565 11:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.825 11:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.825 "name": "raid_bdev1", 00:12:34.825 "uuid": "d15c1cda-a813-4a09-8462-5ecf113e3985", 00:12:34.825 "strip_size_kb": 0, 00:12:34.825 "state": "online", 00:12:34.825 "raid_level": "raid1", 00:12:34.825 "superblock": true, 00:12:34.825 "num_base_bdevs": 2, 00:12:34.825 "num_base_bdevs_discovered": 2, 00:12:34.825 "num_base_bdevs_operational": 2, 00:12:34.825 "base_bdevs_list": [ 00:12:34.825 { 00:12:34.825 "name": "spare", 00:12:34.825 "uuid": "155c06aa-6d98-5e43-8466-cae67c866870", 00:12:34.825 "is_configured": true, 00:12:34.825 "data_offset": 2048, 00:12:34.825 "data_size": 63488 00:12:34.825 }, 00:12:34.825 { 00:12:34.825 "name": "BaseBdev2", 00:12:34.825 "uuid": "09a49e8a-c055-5261-bbf9-bb2886c43dfb", 00:12:34.825 "is_configured": true, 00:12:34.825 "data_offset": 2048, 00:12:34.825 "data_size": 63488 00:12:34.825 } 00:12:34.825 ] 00:12:34.825 }' 00:12:34.825 11:51:00 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.825 11:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.083 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:35.083 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.083 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:35.083 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:35.083 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.083 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.083 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.083 11:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.083 11:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.083 11:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.083 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.083 "name": "raid_bdev1", 00:12:35.083 "uuid": "d15c1cda-a813-4a09-8462-5ecf113e3985", 00:12:35.083 "strip_size_kb": 0, 00:12:35.083 "state": "online", 00:12:35.083 "raid_level": "raid1", 00:12:35.083 "superblock": true, 00:12:35.083 "num_base_bdevs": 2, 00:12:35.083 "num_base_bdevs_discovered": 2, 00:12:35.083 "num_base_bdevs_operational": 2, 00:12:35.083 "base_bdevs_list": [ 00:12:35.083 { 00:12:35.083 "name": "spare", 00:12:35.083 "uuid": "155c06aa-6d98-5e43-8466-cae67c866870", 00:12:35.083 "is_configured": true, 00:12:35.083 "data_offset": 2048, 00:12:35.083 "data_size": 63488 00:12:35.083 }, 
00:12:35.083 { 00:12:35.083 "name": "BaseBdev2", 00:12:35.083 "uuid": "09a49e8a-c055-5261-bbf9-bb2886c43dfb", 00:12:35.083 "is_configured": true, 00:12:35.083 "data_offset": 2048, 00:12:35.083 "data_size": 63488 00:12:35.083 } 00:12:35.083 ] 00:12:35.083 }' 00:12:35.083 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.083 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:35.083 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.084 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:35.084 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.084 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:35.084 11:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.084 11:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.343 11:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.343 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:35.343 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:35.343 11:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.343 11:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.343 [2024-11-27 11:51:01.506782] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:35.343 11:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.343 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:12:35.343 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.343 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.343 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.343 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.343 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:35.343 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.343 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.343 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.343 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.343 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.343 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.343 11:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.343 11:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.343 11:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.343 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.343 "name": "raid_bdev1", 00:12:35.343 "uuid": "d15c1cda-a813-4a09-8462-5ecf113e3985", 00:12:35.343 "strip_size_kb": 0, 00:12:35.343 "state": "online", 00:12:35.343 "raid_level": "raid1", 00:12:35.343 "superblock": true, 00:12:35.343 "num_base_bdevs": 2, 00:12:35.343 "num_base_bdevs_discovered": 1, 00:12:35.343 "num_base_bdevs_operational": 
1, 00:12:35.343 "base_bdevs_list": [ 00:12:35.343 { 00:12:35.343 "name": null, 00:12:35.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.343 "is_configured": false, 00:12:35.343 "data_offset": 0, 00:12:35.343 "data_size": 63488 00:12:35.343 }, 00:12:35.343 { 00:12:35.343 "name": "BaseBdev2", 00:12:35.343 "uuid": "09a49e8a-c055-5261-bbf9-bb2886c43dfb", 00:12:35.343 "is_configured": true, 00:12:35.343 "data_offset": 2048, 00:12:35.343 "data_size": 63488 00:12:35.343 } 00:12:35.343 ] 00:12:35.343 }' 00:12:35.343 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.343 11:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.602 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:35.602 11:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.602 11:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.602 [2024-11-27 11:51:01.934101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:35.602 [2024-11-27 11:51:01.934320] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:35.602 [2024-11-27 11:51:01.934348] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:35.602 [2024-11-27 11:51:01.934395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:35.602 [2024-11-27 11:51:01.950886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:35.602 11:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.602 11:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:35.602 [2024-11-27 11:51:01.952799] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:36.979 11:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:36.979 11:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.979 11:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:36.979 11:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:36.979 11:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.979 11:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.979 11:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.979 11:51:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.979 11:51:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.979 11:51:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.979 11:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.979 "name": "raid_bdev1", 00:12:36.979 "uuid": "d15c1cda-a813-4a09-8462-5ecf113e3985", 00:12:36.979 "strip_size_kb": 0, 00:12:36.979 "state": "online", 00:12:36.979 "raid_level": "raid1", 
00:12:36.979 "superblock": true, 00:12:36.979 "num_base_bdevs": 2, 00:12:36.979 "num_base_bdevs_discovered": 2, 00:12:36.979 "num_base_bdevs_operational": 2, 00:12:36.979 "process": { 00:12:36.979 "type": "rebuild", 00:12:36.979 "target": "spare", 00:12:36.979 "progress": { 00:12:36.979 "blocks": 20480, 00:12:36.979 "percent": 32 00:12:36.979 } 00:12:36.979 }, 00:12:36.979 "base_bdevs_list": [ 00:12:36.979 { 00:12:36.979 "name": "spare", 00:12:36.979 "uuid": "155c06aa-6d98-5e43-8466-cae67c866870", 00:12:36.979 "is_configured": true, 00:12:36.979 "data_offset": 2048, 00:12:36.979 "data_size": 63488 00:12:36.979 }, 00:12:36.979 { 00:12:36.979 "name": "BaseBdev2", 00:12:36.979 "uuid": "09a49e8a-c055-5261-bbf9-bb2886c43dfb", 00:12:36.979 "is_configured": true, 00:12:36.979 "data_offset": 2048, 00:12:36.979 "data_size": 63488 00:12:36.979 } 00:12:36.979 ] 00:12:36.979 }' 00:12:36.979 11:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.979 11:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:36.979 11:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.979 11:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:36.979 11:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:36.979 11:51:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.979 11:51:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.979 [2024-11-27 11:51:03.092319] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:36.979 [2024-11-27 11:51:03.158556] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:36.979 [2024-11-27 11:51:03.158668] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:12:36.979 [2024-11-27 11:51:03.158683] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:36.979 [2024-11-27 11:51:03.158693] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:36.979 11:51:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.979 11:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:36.979 11:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.979 11:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.979 11:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.979 11:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.979 11:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:36.979 11:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.979 11:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.979 11:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.979 11:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.979 11:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.979 11:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.979 11:51:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.979 11:51:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.979 11:51:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.979 11:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.979 "name": "raid_bdev1", 00:12:36.979 "uuid": "d15c1cda-a813-4a09-8462-5ecf113e3985", 00:12:36.979 "strip_size_kb": 0, 00:12:36.979 "state": "online", 00:12:36.979 "raid_level": "raid1", 00:12:36.979 "superblock": true, 00:12:36.979 "num_base_bdevs": 2, 00:12:36.979 "num_base_bdevs_discovered": 1, 00:12:36.979 "num_base_bdevs_operational": 1, 00:12:36.979 "base_bdevs_list": [ 00:12:36.979 { 00:12:36.979 "name": null, 00:12:36.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.979 "is_configured": false, 00:12:36.979 "data_offset": 0, 00:12:36.979 "data_size": 63488 00:12:36.979 }, 00:12:36.979 { 00:12:36.979 "name": "BaseBdev2", 00:12:36.979 "uuid": "09a49e8a-c055-5261-bbf9-bb2886c43dfb", 00:12:36.979 "is_configured": true, 00:12:36.979 "data_offset": 2048, 00:12:36.979 "data_size": 63488 00:12:36.979 } 00:12:36.979 ] 00:12:36.979 }' 00:12:36.979 11:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.979 11:51:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.548 11:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:37.548 11:51:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.548 11:51:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.548 [2024-11-27 11:51:03.693063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:37.548 [2024-11-27 11:51:03.693136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.548 [2024-11-27 11:51:03.693158] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:37.548 [2024-11-27 11:51:03.693169] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.548 [2024-11-27 11:51:03.693668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.548 [2024-11-27 11:51:03.693700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:37.548 [2024-11-27 11:51:03.693799] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:37.548 [2024-11-27 11:51:03.693822] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:37.548 [2024-11-27 11:51:03.693833] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:37.548 [2024-11-27 11:51:03.693871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:37.548 [2024-11-27 11:51:03.709064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:37.548 spare 00:12:37.548 11:51:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.548 [2024-11-27 11:51:03.710930] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:37.548 11:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:38.488 11:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:38.488 11:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.488 11:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:38.488 11:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:38.488 11:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.488 11:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:38.488 11:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.488 11:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.488 11:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.488 11:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.488 11:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.488 "name": "raid_bdev1", 00:12:38.488 "uuid": "d15c1cda-a813-4a09-8462-5ecf113e3985", 00:12:38.488 "strip_size_kb": 0, 00:12:38.488 "state": "online", 00:12:38.488 "raid_level": "raid1", 00:12:38.488 "superblock": true, 00:12:38.488 "num_base_bdevs": 2, 00:12:38.488 "num_base_bdevs_discovered": 2, 00:12:38.488 "num_base_bdevs_operational": 2, 00:12:38.488 "process": { 00:12:38.488 "type": "rebuild", 00:12:38.488 "target": "spare", 00:12:38.488 "progress": { 00:12:38.488 "blocks": 20480, 00:12:38.488 "percent": 32 00:12:38.488 } 00:12:38.488 }, 00:12:38.488 "base_bdevs_list": [ 00:12:38.488 { 00:12:38.488 "name": "spare", 00:12:38.488 "uuid": "155c06aa-6d98-5e43-8466-cae67c866870", 00:12:38.488 "is_configured": true, 00:12:38.488 "data_offset": 2048, 00:12:38.488 "data_size": 63488 00:12:38.488 }, 00:12:38.488 { 00:12:38.488 "name": "BaseBdev2", 00:12:38.488 "uuid": "09a49e8a-c055-5261-bbf9-bb2886c43dfb", 00:12:38.488 "is_configured": true, 00:12:38.488 "data_offset": 2048, 00:12:38.488 "data_size": 63488 00:12:38.488 } 00:12:38.488 ] 00:12:38.488 }' 00:12:38.488 11:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.488 11:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:38.488 11:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.488 
11:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:38.488 11:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:38.488 11:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.488 11:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.489 [2024-11-27 11:51:04.842533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:38.748 [2024-11-27 11:51:04.916838] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:38.748 [2024-11-27 11:51:04.916935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.748 [2024-11-27 11:51:04.916958] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:38.748 [2024-11-27 11:51:04.916968] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:38.748 11:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.748 11:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:38.748 11:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.748 11:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.748 11:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.748 11:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.748 11:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:38.748 11:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.748 11:51:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.748 11:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.748 11:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.748 11:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.748 11:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.748 11:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.748 11:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.748 11:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.748 11:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.748 "name": "raid_bdev1", 00:12:38.748 "uuid": "d15c1cda-a813-4a09-8462-5ecf113e3985", 00:12:38.748 "strip_size_kb": 0, 00:12:38.748 "state": "online", 00:12:38.748 "raid_level": "raid1", 00:12:38.748 "superblock": true, 00:12:38.748 "num_base_bdevs": 2, 00:12:38.748 "num_base_bdevs_discovered": 1, 00:12:38.748 "num_base_bdevs_operational": 1, 00:12:38.748 "base_bdevs_list": [ 00:12:38.748 { 00:12:38.748 "name": null, 00:12:38.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.748 "is_configured": false, 00:12:38.748 "data_offset": 0, 00:12:38.748 "data_size": 63488 00:12:38.748 }, 00:12:38.748 { 00:12:38.748 "name": "BaseBdev2", 00:12:38.748 "uuid": "09a49e8a-c055-5261-bbf9-bb2886c43dfb", 00:12:38.748 "is_configured": true, 00:12:38.748 "data_offset": 2048, 00:12:38.748 "data_size": 63488 00:12:38.748 } 00:12:38.748 ] 00:12:38.748 }' 00:12:38.748 11:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.748 11:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.317 11:51:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:39.317 11:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.317 11:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:39.317 11:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:39.318 11:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.318 11:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.318 11:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.318 11:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.318 11:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.318 11:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.318 11:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.318 "name": "raid_bdev1", 00:12:39.318 "uuid": "d15c1cda-a813-4a09-8462-5ecf113e3985", 00:12:39.318 "strip_size_kb": 0, 00:12:39.318 "state": "online", 00:12:39.318 "raid_level": "raid1", 00:12:39.318 "superblock": true, 00:12:39.318 "num_base_bdevs": 2, 00:12:39.318 "num_base_bdevs_discovered": 1, 00:12:39.318 "num_base_bdevs_operational": 1, 00:12:39.318 "base_bdevs_list": [ 00:12:39.318 { 00:12:39.318 "name": null, 00:12:39.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.318 "is_configured": false, 00:12:39.318 "data_offset": 0, 00:12:39.318 "data_size": 63488 00:12:39.318 }, 00:12:39.318 { 00:12:39.318 "name": "BaseBdev2", 00:12:39.318 "uuid": "09a49e8a-c055-5261-bbf9-bb2886c43dfb", 00:12:39.318 "is_configured": true, 00:12:39.318 "data_offset": 2048, 00:12:39.318 "data_size": 
63488 00:12:39.318 } 00:12:39.318 ] 00:12:39.318 }' 00:12:39.318 11:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.318 11:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:39.318 11:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.318 11:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:39.318 11:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:39.318 11:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.318 11:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.318 11:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.318 11:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:39.318 11:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.318 11:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.318 [2024-11-27 11:51:05.537429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:39.318 [2024-11-27 11:51:05.537500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.318 [2024-11-27 11:51:05.537531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:39.318 [2024-11-27 11:51:05.537551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.318 [2024-11-27 11:51:05.538035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.318 [2024-11-27 11:51:05.538060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:12:39.318 [2024-11-27 11:51:05.538154] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:39.318 [2024-11-27 11:51:05.538170] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:39.318 [2024-11-27 11:51:05.538183] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:39.318 [2024-11-27 11:51:05.538194] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:39.318 BaseBdev1 00:12:39.318 11:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.318 11:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:40.254 11:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:40.254 11:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.254 11:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.254 11:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.254 11:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.254 11:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:40.254 11:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.254 11:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.254 11:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.254 11:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.254 11:51:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.254 11:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.254 11:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.254 11:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.254 11:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.254 11:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.254 "name": "raid_bdev1", 00:12:40.254 "uuid": "d15c1cda-a813-4a09-8462-5ecf113e3985", 00:12:40.254 "strip_size_kb": 0, 00:12:40.254 "state": "online", 00:12:40.254 "raid_level": "raid1", 00:12:40.254 "superblock": true, 00:12:40.254 "num_base_bdevs": 2, 00:12:40.254 "num_base_bdevs_discovered": 1, 00:12:40.254 "num_base_bdevs_operational": 1, 00:12:40.254 "base_bdevs_list": [ 00:12:40.254 { 00:12:40.254 "name": null, 00:12:40.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.254 "is_configured": false, 00:12:40.254 "data_offset": 0, 00:12:40.254 "data_size": 63488 00:12:40.254 }, 00:12:40.254 { 00:12:40.254 "name": "BaseBdev2", 00:12:40.254 "uuid": "09a49e8a-c055-5261-bbf9-bb2886c43dfb", 00:12:40.254 "is_configured": true, 00:12:40.254 "data_offset": 2048, 00:12:40.254 "data_size": 63488 00:12:40.254 } 00:12:40.254 ] 00:12:40.254 }' 00:12:40.254 11:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.254 11:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.824 "name": "raid_bdev1", 00:12:40.824 "uuid": "d15c1cda-a813-4a09-8462-5ecf113e3985", 00:12:40.824 "strip_size_kb": 0, 00:12:40.824 "state": "online", 00:12:40.824 "raid_level": "raid1", 00:12:40.824 "superblock": true, 00:12:40.824 "num_base_bdevs": 2, 00:12:40.824 "num_base_bdevs_discovered": 1, 00:12:40.824 "num_base_bdevs_operational": 1, 00:12:40.824 "base_bdevs_list": [ 00:12:40.824 { 00:12:40.824 "name": null, 00:12:40.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.824 "is_configured": false, 00:12:40.824 "data_offset": 0, 00:12:40.824 "data_size": 63488 00:12:40.824 }, 00:12:40.824 { 00:12:40.824 "name": "BaseBdev2", 00:12:40.824 "uuid": "09a49e8a-c055-5261-bbf9-bb2886c43dfb", 00:12:40.824 "is_configured": true, 00:12:40.824 "data_offset": 2048, 00:12:40.824 "data_size": 63488 00:12:40.824 } 00:12:40.824 ] 00:12:40.824 }' 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:40.824 11:51:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.824 [2024-11-27 11:51:07.166741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:40.824 [2024-11-27 11:51:07.166948] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:40.824 [2024-11-27 11:51:07.166976] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:40.824 request: 00:12:40.824 { 00:12:40.824 "base_bdev": "BaseBdev1", 00:12:40.824 "raid_bdev": "raid_bdev1", 00:12:40.824 "method": 
"bdev_raid_add_base_bdev", 00:12:40.824 "req_id": 1 00:12:40.824 } 00:12:40.824 Got JSON-RPC error response 00:12:40.824 response: 00:12:40.824 { 00:12:40.824 "code": -22, 00:12:40.824 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:40.824 } 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:40.824 11:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:42.206 11:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:42.206 11:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.206 11:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.206 11:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.206 11:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.206 11:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:42.206 11:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.206 11:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.206 11:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.206 11:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.206 11:51:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.206 11:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.206 11:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.206 11:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.206 11:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.206 11:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.206 "name": "raid_bdev1", 00:12:42.206 "uuid": "d15c1cda-a813-4a09-8462-5ecf113e3985", 00:12:42.206 "strip_size_kb": 0, 00:12:42.206 "state": "online", 00:12:42.206 "raid_level": "raid1", 00:12:42.206 "superblock": true, 00:12:42.206 "num_base_bdevs": 2, 00:12:42.206 "num_base_bdevs_discovered": 1, 00:12:42.206 "num_base_bdevs_operational": 1, 00:12:42.206 "base_bdevs_list": [ 00:12:42.206 { 00:12:42.206 "name": null, 00:12:42.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.206 "is_configured": false, 00:12:42.206 "data_offset": 0, 00:12:42.206 "data_size": 63488 00:12:42.206 }, 00:12:42.206 { 00:12:42.206 "name": "BaseBdev2", 00:12:42.206 "uuid": "09a49e8a-c055-5261-bbf9-bb2886c43dfb", 00:12:42.206 "is_configured": true, 00:12:42.206 "data_offset": 2048, 00:12:42.206 "data_size": 63488 00:12:42.206 } 00:12:42.206 ] 00:12:42.206 }' 00:12:42.206 11:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.206 11:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.466 11:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:42.466 11:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.466 11:51:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:42.466 11:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:42.466 11:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.466 11:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.466 11:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.466 11:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.466 11:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.466 11:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.466 11:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.466 "name": "raid_bdev1", 00:12:42.466 "uuid": "d15c1cda-a813-4a09-8462-5ecf113e3985", 00:12:42.466 "strip_size_kb": 0, 00:12:42.466 "state": "online", 00:12:42.466 "raid_level": "raid1", 00:12:42.466 "superblock": true, 00:12:42.466 "num_base_bdevs": 2, 00:12:42.466 "num_base_bdevs_discovered": 1, 00:12:42.466 "num_base_bdevs_operational": 1, 00:12:42.466 "base_bdevs_list": [ 00:12:42.466 { 00:12:42.466 "name": null, 00:12:42.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.466 "is_configured": false, 00:12:42.466 "data_offset": 0, 00:12:42.466 "data_size": 63488 00:12:42.466 }, 00:12:42.466 { 00:12:42.466 "name": "BaseBdev2", 00:12:42.466 "uuid": "09a49e8a-c055-5261-bbf9-bb2886c43dfb", 00:12:42.466 "is_configured": true, 00:12:42.466 "data_offset": 2048, 00:12:42.466 "data_size": 63488 00:12:42.466 } 00:12:42.466 ] 00:12:42.466 }' 00:12:42.466 11:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.466 11:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:12:42.466 11:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.466 11:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:42.466 11:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75724 00:12:42.466 11:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75724 ']' 00:12:42.466 11:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75724 00:12:42.466 11:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:42.466 11:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:42.466 11:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75724 00:12:42.466 11:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:42.466 11:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:42.466 killing process with pid 75724 00:12:42.466 11:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75724' 00:12:42.466 11:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75724 00:12:42.466 Received shutdown signal, test time was about 60.000000 seconds 00:12:42.466 00:12:42.466 Latency(us) 00:12:42.466 [2024-11-27T11:51:08.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.466 [2024-11-27T11:51:08.851Z] =================================================================================================================== 00:12:42.466 [2024-11-27T11:51:08.851Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:42.466 [2024-11-27 11:51:08.746122] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:42.466 [2024-11-27 
11:51:08.746249] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:42.466 11:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75724 00:12:42.466 [2024-11-27 11:51:08.746328] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:42.466 [2024-11-27 11:51:08.746342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:42.726 [2024-11-27 11:51:09.055871] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:44.106 00:12:44.106 real 0m23.327s 00:12:44.106 user 0m28.353s 00:12:44.106 sys 0m3.563s 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.106 ************************************ 00:12:44.106 END TEST raid_rebuild_test_sb 00:12:44.106 ************************************ 00:12:44.106 11:51:10 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:44.106 11:51:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:44.106 11:51:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.106 11:51:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:44.106 ************************************ 00:12:44.106 START TEST raid_rebuild_test_io 00:12:44.106 ************************************ 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:44.106 
11:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76453 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76453 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76453 ']' 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.106 11:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.106 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:44.106 Zero copy mechanism will not be used. 00:12:44.106 [2024-11-27 11:51:10.357074] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:12:44.106 [2024-11-27 11:51:10.357183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76453 ] 00:12:44.366 [2024-11-27 11:51:10.532222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.366 [2024-11-27 11:51:10.649590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.627 [2024-11-27 11:51:10.851490] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:44.627 [2024-11-27 11:51:10.851562] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:44.887 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.887 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:12:44.887 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:44.887 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:44.887 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.887 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.887 BaseBdev1_malloc 00:12:44.887 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.887 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:44.887 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.887 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.887 [2024-11-27 11:51:11.247652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:12:44.887 [2024-11-27 11:51:11.247721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.887 [2024-11-27 11:51:11.247747] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:44.887 [2024-11-27 11:51:11.247760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.887 [2024-11-27 11:51:11.249935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.887 [2024-11-27 11:51:11.249973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:44.887 BaseBdev1 00:12:44.887 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.887 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:44.887 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:44.887 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.887 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.147 BaseBdev2_malloc 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.147 [2024-11-27 11:51:11.304295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:45.147 [2024-11-27 11:51:11.304362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.147 [2024-11-27 11:51:11.304385] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:45.147 [2024-11-27 11:51:11.304398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.147 [2024-11-27 11:51:11.306613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.147 [2024-11-27 11:51:11.306649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:45.147 BaseBdev2 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.147 spare_malloc 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.147 spare_delay 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.147 [2024-11-27 11:51:11.387934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:12:45.147 [2024-11-27 11:51:11.388011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.147 [2024-11-27 11:51:11.388038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:45.147 [2024-11-27 11:51:11.388050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.147 [2024-11-27 11:51:11.390311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.147 [2024-11-27 11:51:11.390352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:45.147 spare 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.147 [2024-11-27 11:51:11.399929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:45.147 [2024-11-27 11:51:11.401675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:45.147 [2024-11-27 11:51:11.401769] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:45.147 [2024-11-27 11:51:11.401783] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:45.147 [2024-11-27 11:51:11.402050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:45.147 [2024-11-27 11:51:11.402224] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:45.147 [2024-11-27 11:51:11.402242] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:12:45.147 [2024-11-27 11:51:11.402400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.147 
"name": "raid_bdev1", 00:12:45.147 "uuid": "33ed5577-e9e3-452c-8fa2-3883931bd268", 00:12:45.147 "strip_size_kb": 0, 00:12:45.147 "state": "online", 00:12:45.147 "raid_level": "raid1", 00:12:45.147 "superblock": false, 00:12:45.147 "num_base_bdevs": 2, 00:12:45.147 "num_base_bdevs_discovered": 2, 00:12:45.147 "num_base_bdevs_operational": 2, 00:12:45.147 "base_bdevs_list": [ 00:12:45.147 { 00:12:45.147 "name": "BaseBdev1", 00:12:45.147 "uuid": "7f1c5d86-ff50-59cf-8a1e-6c8190d3f26f", 00:12:45.147 "is_configured": true, 00:12:45.147 "data_offset": 0, 00:12:45.147 "data_size": 65536 00:12:45.147 }, 00:12:45.147 { 00:12:45.147 "name": "BaseBdev2", 00:12:45.147 "uuid": "975a8763-cc45-51de-b818-162da5089bea", 00:12:45.147 "is_configured": true, 00:12:45.147 "data_offset": 0, 00:12:45.147 "data_size": 65536 00:12:45.147 } 00:12:45.147 ] 00:12:45.147 }' 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.147 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.717 [2024-11-27 11:51:11.875454] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.717 [2024-11-27 11:51:11.970989] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:45.717 11:51:11 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.717 11:51:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.717 11:51:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.717 11:51:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.717 "name": "raid_bdev1", 00:12:45.717 "uuid": "33ed5577-e9e3-452c-8fa2-3883931bd268", 00:12:45.717 "strip_size_kb": 0, 00:12:45.717 "state": "online", 00:12:45.717 "raid_level": "raid1", 00:12:45.717 "superblock": false, 00:12:45.717 "num_base_bdevs": 2, 00:12:45.717 "num_base_bdevs_discovered": 1, 00:12:45.717 "num_base_bdevs_operational": 1, 00:12:45.717 "base_bdevs_list": [ 00:12:45.717 { 00:12:45.717 "name": null, 00:12:45.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.717 "is_configured": false, 00:12:45.717 "data_offset": 0, 00:12:45.717 "data_size": 65536 00:12:45.717 }, 00:12:45.717 { 00:12:45.717 "name": "BaseBdev2", 00:12:45.717 "uuid": "975a8763-cc45-51de-b818-162da5089bea", 00:12:45.717 "is_configured": true, 00:12:45.717 "data_offset": 0, 00:12:45.717 "data_size": 65536 00:12:45.717 } 00:12:45.717 ] 00:12:45.717 }' 00:12:45.717 11:51:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:45.717 11:51:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.717 [2024-11-27 11:51:12.071212] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:45.717 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:45.717 Zero copy mechanism will not be used. 00:12:45.717 Running I/O for 60 seconds... 00:12:46.285 11:51:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:46.285 11:51:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.286 11:51:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.286 [2024-11-27 11:51:12.420144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:46.286 11:51:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.286 11:51:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:46.286 [2024-11-27 11:51:12.484931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:46.286 [2024-11-27 11:51:12.486822] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:46.286 [2024-11-27 11:51:12.595753] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:46.286 [2024-11-27 11:51:12.596341] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:46.545 [2024-11-27 11:51:12.824559] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:46.545 [2024-11-27 11:51:12.824927] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:47.065 177.00 IOPS, 531.00 MiB/s 
[2024-11-27T11:51:13.450Z] [2024-11-27 11:51:13.285429] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:47.324 11:51:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:47.324 11:51:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.324 11:51:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:47.324 11:51:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:47.324 11:51:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.324 11:51:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.324 11:51:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.324 11:51:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.324 11:51:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.324 11:51:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.324 [2024-11-27 11:51:13.518856] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:47.324 11:51:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.324 "name": "raid_bdev1", 00:12:47.324 "uuid": "33ed5577-e9e3-452c-8fa2-3883931bd268", 00:12:47.324 "strip_size_kb": 0, 00:12:47.324 "state": "online", 00:12:47.324 "raid_level": "raid1", 00:12:47.324 "superblock": false, 00:12:47.324 "num_base_bdevs": 2, 00:12:47.324 "num_base_bdevs_discovered": 2, 00:12:47.324 "num_base_bdevs_operational": 2, 00:12:47.324 "process": { 00:12:47.324 "type": "rebuild", 00:12:47.324 "target": 
"spare", 00:12:47.324 "progress": { 00:12:47.324 "blocks": 12288, 00:12:47.324 "percent": 18 00:12:47.324 } 00:12:47.324 }, 00:12:47.324 "base_bdevs_list": [ 00:12:47.324 { 00:12:47.324 "name": "spare", 00:12:47.324 "uuid": "defb986a-01f9-5f7d-8cd6-ed53fa8b25bd", 00:12:47.324 "is_configured": true, 00:12:47.324 "data_offset": 0, 00:12:47.324 "data_size": 65536 00:12:47.324 }, 00:12:47.324 { 00:12:47.324 "name": "BaseBdev2", 00:12:47.324 "uuid": "975a8763-cc45-51de-b818-162da5089bea", 00:12:47.324 "is_configured": true, 00:12:47.324 "data_offset": 0, 00:12:47.324 "data_size": 65536 00:12:47.324 } 00:12:47.324 ] 00:12:47.324 }' 00:12:47.324 11:51:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.324 11:51:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:47.324 11:51:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.324 11:51:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:47.324 11:51:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:47.324 11:51:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.324 11:51:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.324 [2024-11-27 11:51:13.624467] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:47.324 [2024-11-27 11:51:13.642208] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:47.324 [2024-11-27 11:51:13.645009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.324 [2024-11-27 11:51:13.645043] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:47.324 [2024-11-27 11:51:13.645059] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: 
*ERROR*: Failed to remove target bdev: No such device 00:12:47.324 [2024-11-27 11:51:13.687500] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:47.324 11:51:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.324 11:51:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:47.324 11:51:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.324 11:51:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.324 11:51:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.324 11:51:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.583 11:51:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:47.583 11:51:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.583 11:51:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.583 11:51:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.583 11:51:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.583 11:51:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.583 11:51:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.583 11:51:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.583 11:51:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.583 11:51:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.583 11:51:13 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.583 "name": "raid_bdev1", 00:12:47.583 "uuid": "33ed5577-e9e3-452c-8fa2-3883931bd268", 00:12:47.583 "strip_size_kb": 0, 00:12:47.583 "state": "online", 00:12:47.583 "raid_level": "raid1", 00:12:47.583 "superblock": false, 00:12:47.583 "num_base_bdevs": 2, 00:12:47.583 "num_base_bdevs_discovered": 1, 00:12:47.583 "num_base_bdevs_operational": 1, 00:12:47.583 "base_bdevs_list": [ 00:12:47.583 { 00:12:47.583 "name": null, 00:12:47.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.583 "is_configured": false, 00:12:47.583 "data_offset": 0, 00:12:47.583 "data_size": 65536 00:12:47.583 }, 00:12:47.583 { 00:12:47.583 "name": "BaseBdev2", 00:12:47.583 "uuid": "975a8763-cc45-51de-b818-162da5089bea", 00:12:47.583 "is_configured": true, 00:12:47.583 "data_offset": 0, 00:12:47.583 "data_size": 65536 00:12:47.583 } 00:12:47.583 ] 00:12:47.583 }' 00:12:47.583 11:51:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.583 11:51:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.843 164.00 IOPS, 492.00 MiB/s [2024-11-27T11:51:14.228Z] 11:51:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:47.843 11:51:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.843 11:51:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:47.843 11:51:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:47.843 11:51:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.843 11:51:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.843 11:51:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.843 11:51:14 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.843 11:51:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.843 11:51:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.843 11:51:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.843 "name": "raid_bdev1", 00:12:47.843 "uuid": "33ed5577-e9e3-452c-8fa2-3883931bd268", 00:12:47.843 "strip_size_kb": 0, 00:12:47.843 "state": "online", 00:12:47.843 "raid_level": "raid1", 00:12:47.843 "superblock": false, 00:12:47.843 "num_base_bdevs": 2, 00:12:47.843 "num_base_bdevs_discovered": 1, 00:12:47.843 "num_base_bdevs_operational": 1, 00:12:47.843 "base_bdevs_list": [ 00:12:47.843 { 00:12:47.843 "name": null, 00:12:47.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.843 "is_configured": false, 00:12:47.843 "data_offset": 0, 00:12:47.843 "data_size": 65536 00:12:47.843 }, 00:12:47.843 { 00:12:47.843 "name": "BaseBdev2", 00:12:47.843 "uuid": "975a8763-cc45-51de-b818-162da5089bea", 00:12:47.843 "is_configured": true, 00:12:47.843 "data_offset": 0, 00:12:47.843 "data_size": 65536 00:12:47.843 } 00:12:47.843 ] 00:12:47.843 }' 00:12:47.843 11:51:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.135 11:51:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:48.135 11:51:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.135 11:51:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:48.135 11:51:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:48.135 11:51:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.135 11:51:14 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:48.135 [2024-11-27 11:51:14.302856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:48.135 11:51:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.135 11:51:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:48.135 [2024-11-27 11:51:14.366975] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:48.135 [2024-11-27 11:51:14.369050] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:48.135 [2024-11-27 11:51:14.481307] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:48.135 [2024-11-27 11:51:14.481966] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:48.394 [2024-11-27 11:51:14.696883] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:48.394 [2024-11-27 11:51:14.697236] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:48.654 [2024-11-27 11:51:15.017336] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:48.913 153.33 IOPS, 460.00 MiB/s [2024-11-27T11:51:15.298Z] [2024-11-27 11:51:15.226220] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:48.913 [2024-11-27 11:51:15.226576] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.173 "name": "raid_bdev1", 00:12:49.173 "uuid": "33ed5577-e9e3-452c-8fa2-3883931bd268", 00:12:49.173 "strip_size_kb": 0, 00:12:49.173 "state": "online", 00:12:49.173 "raid_level": "raid1", 00:12:49.173 "superblock": false, 00:12:49.173 "num_base_bdevs": 2, 00:12:49.173 "num_base_bdevs_discovered": 2, 00:12:49.173 "num_base_bdevs_operational": 2, 00:12:49.173 "process": { 00:12:49.173 "type": "rebuild", 00:12:49.173 "target": "spare", 00:12:49.173 "progress": { 00:12:49.173 "blocks": 10240, 00:12:49.173 "percent": 15 00:12:49.173 } 00:12:49.173 }, 00:12:49.173 "base_bdevs_list": [ 00:12:49.173 { 00:12:49.173 "name": "spare", 00:12:49.173 "uuid": "defb986a-01f9-5f7d-8cd6-ed53fa8b25bd", 00:12:49.173 "is_configured": true, 00:12:49.173 "data_offset": 0, 00:12:49.173 "data_size": 65536 00:12:49.173 }, 00:12:49.173 { 00:12:49.173 "name": "BaseBdev2", 00:12:49.173 "uuid": "975a8763-cc45-51de-b818-162da5089bea", 00:12:49.173 "is_configured": true, 00:12:49.173 
"data_offset": 0, 00:12:49.173 "data_size": 65536 00:12:49.173 } 00:12:49.173 ] 00:12:49.173 }' 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=410 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.173 11:51:15 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.173 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.173 "name": "raid_bdev1", 00:12:49.173 "uuid": "33ed5577-e9e3-452c-8fa2-3883931bd268", 00:12:49.173 "strip_size_kb": 0, 00:12:49.173 "state": "online", 00:12:49.173 "raid_level": "raid1", 00:12:49.173 "superblock": false, 00:12:49.173 "num_base_bdevs": 2, 00:12:49.173 "num_base_bdevs_discovered": 2, 00:12:49.173 "num_base_bdevs_operational": 2, 00:12:49.173 "process": { 00:12:49.173 "type": "rebuild", 00:12:49.173 "target": "spare", 00:12:49.173 "progress": { 00:12:49.173 "blocks": 12288, 00:12:49.173 "percent": 18 00:12:49.173 } 00:12:49.173 }, 00:12:49.173 "base_bdevs_list": [ 00:12:49.173 { 00:12:49.173 "name": "spare", 00:12:49.173 "uuid": "defb986a-01f9-5f7d-8cd6-ed53fa8b25bd", 00:12:49.174 "is_configured": true, 00:12:49.174 "data_offset": 0, 00:12:49.174 "data_size": 65536 00:12:49.174 }, 00:12:49.174 { 00:12:49.174 "name": "BaseBdev2", 00:12:49.174 "uuid": "975a8763-cc45-51de-b818-162da5089bea", 00:12:49.174 "is_configured": true, 00:12:49.174 "data_offset": 0, 00:12:49.174 "data_size": 65536 00:12:49.174 } 00:12:49.174 ] 00:12:49.174 }' 00:12:49.174 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.433 [2024-11-27 11:51:15.569945] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:49.433 [2024-11-27 11:51:15.570568] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:49.433 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 
-- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:49.433 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.433 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:49.433 11:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:49.433 [2024-11-27 11:51:15.772337] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:49.433 [2024-11-27 11:51:15.772711] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:49.716 [2024-11-27 11:51:16.000698] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:49.974 128.75 IOPS, 386.25 MiB/s [2024-11-27T11:51:16.359Z] [2024-11-27 11:51:16.102675] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:49.974 [2024-11-27 11:51:16.103054] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:50.232 [2024-11-27 11:51:16.440487] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:50.232 [2024-11-27 11:51:16.441120] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:50.491 11:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:50.491 11:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.492 11:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.492 11:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:12:50.492 11:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.492 11:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.492 11:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.492 11:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.492 11:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.492 11:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.492 11:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.492 11:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.492 "name": "raid_bdev1", 00:12:50.492 "uuid": "33ed5577-e9e3-452c-8fa2-3883931bd268", 00:12:50.492 "strip_size_kb": 0, 00:12:50.492 "state": "online", 00:12:50.492 "raid_level": "raid1", 00:12:50.492 "superblock": false, 00:12:50.492 "num_base_bdevs": 2, 00:12:50.492 "num_base_bdevs_discovered": 2, 00:12:50.492 "num_base_bdevs_operational": 2, 00:12:50.492 "process": { 00:12:50.492 "type": "rebuild", 00:12:50.492 "target": "spare", 00:12:50.492 "progress": { 00:12:50.492 "blocks": 26624, 00:12:50.492 "percent": 40 00:12:50.492 } 00:12:50.492 }, 00:12:50.492 "base_bdevs_list": [ 00:12:50.492 { 00:12:50.492 "name": "spare", 00:12:50.492 "uuid": "defb986a-01f9-5f7d-8cd6-ed53fa8b25bd", 00:12:50.492 "is_configured": true, 00:12:50.492 "data_offset": 0, 00:12:50.492 "data_size": 65536 00:12:50.492 }, 00:12:50.492 { 00:12:50.492 "name": "BaseBdev2", 00:12:50.492 "uuid": "975a8763-cc45-51de-b818-162da5089bea", 00:12:50.492 "is_configured": true, 00:12:50.492 "data_offset": 0, 00:12:50.492 "data_size": 65536 00:12:50.492 } 00:12:50.492 ] 00:12:50.492 }' 00:12:50.492 11:51:16 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.492 11:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:50.492 11:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.492 11:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:50.492 11:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:51.084 109.00 IOPS, 327.00 MiB/s [2024-11-27T11:51:17.469Z] [2024-11-27 11:51:17.345172] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:51.653 11:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:51.653 11:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:51.653 11:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.653 11:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:51.653 11:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:51.653 11:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.653 11:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.653 11:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.653 11:51:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.653 11:51:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.653 [2024-11-27 11:51:17.794576] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 
43008 offset_end: 49152 00:12:51.653 11:51:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.653 11:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.653 "name": "raid_bdev1", 00:12:51.653 "uuid": "33ed5577-e9e3-452c-8fa2-3883931bd268", 00:12:51.653 "strip_size_kb": 0, 00:12:51.653 "state": "online", 00:12:51.653 "raid_level": "raid1", 00:12:51.653 "superblock": false, 00:12:51.653 "num_base_bdevs": 2, 00:12:51.653 "num_base_bdevs_discovered": 2, 00:12:51.653 "num_base_bdevs_operational": 2, 00:12:51.653 "process": { 00:12:51.653 "type": "rebuild", 00:12:51.653 "target": "spare", 00:12:51.653 "progress": { 00:12:51.653 "blocks": 45056, 00:12:51.653 "percent": 68 00:12:51.653 } 00:12:51.653 }, 00:12:51.653 "base_bdevs_list": [ 00:12:51.653 { 00:12:51.653 "name": "spare", 00:12:51.653 "uuid": "defb986a-01f9-5f7d-8cd6-ed53fa8b25bd", 00:12:51.653 "is_configured": true, 00:12:51.653 "data_offset": 0, 00:12:51.653 "data_size": 65536 00:12:51.653 }, 00:12:51.653 { 00:12:51.653 "name": "BaseBdev2", 00:12:51.653 "uuid": "975a8763-cc45-51de-b818-162da5089bea", 00:12:51.653 "is_configured": true, 00:12:51.653 "data_offset": 0, 00:12:51.653 "data_size": 65536 00:12:51.653 } 00:12:51.653 ] 00:12:51.653 }' 00:12:51.653 11:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.653 11:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:51.653 11:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.653 11:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:51.653 11:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:51.913 98.67 IOPS, 296.00 MiB/s [2024-11-27T11:51:18.298Z] [2024-11-27 11:51:18.118748] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:12:52.853 [2024-11-27 11:51:18.892422] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:52.853 11:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:52.853 11:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.853 11:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.853 11:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.853 11:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.853 11:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.853 11:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.853 11:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.853 11:51:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.853 11:51:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.853 11:51:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.853 11:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.853 "name": "raid_bdev1", 00:12:52.853 "uuid": "33ed5577-e9e3-452c-8fa2-3883931bd268", 00:12:52.853 "strip_size_kb": 0, 00:12:52.853 "state": "online", 00:12:52.853 "raid_level": "raid1", 00:12:52.853 "superblock": false, 00:12:52.853 "num_base_bdevs": 2, 00:12:52.853 "num_base_bdevs_discovered": 2, 00:12:52.853 "num_base_bdevs_operational": 2, 00:12:52.853 "process": { 00:12:52.853 "type": "rebuild", 00:12:52.853 "target": "spare", 00:12:52.853 "progress": { 
00:12:52.853 "blocks": 65536, 00:12:52.853 "percent": 100 00:12:52.853 } 00:12:52.853 }, 00:12:52.853 "base_bdevs_list": [ 00:12:52.853 { 00:12:52.853 "name": "spare", 00:12:52.853 "uuid": "defb986a-01f9-5f7d-8cd6-ed53fa8b25bd", 00:12:52.853 "is_configured": true, 00:12:52.853 "data_offset": 0, 00:12:52.853 "data_size": 65536 00:12:52.853 }, 00:12:52.853 { 00:12:52.853 "name": "BaseBdev2", 00:12:52.853 "uuid": "975a8763-cc45-51de-b818-162da5089bea", 00:12:52.853 "is_configured": true, 00:12:52.853 "data_offset": 0, 00:12:52.853 "data_size": 65536 00:12:52.853 } 00:12:52.853 ] 00:12:52.853 }' 00:12:52.853 11:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.853 [2024-11-27 11:51:18.996772] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:52.853 [2024-11-27 11:51:18.999446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.853 11:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:52.853 11:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.853 89.43 IOPS, 268.29 MiB/s [2024-11-27T11:51:19.238Z] 11:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.853 11:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:53.792 83.12 IOPS, 249.38 MiB/s [2024-11-27T11:51:20.177Z] 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:53.792 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:53.792 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.792 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:53.792 11:51:20 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:53.792 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.792 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.792 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.792 11:51:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.792 11:51:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.792 11:51:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.792 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.792 "name": "raid_bdev1", 00:12:53.792 "uuid": "33ed5577-e9e3-452c-8fa2-3883931bd268", 00:12:53.792 "strip_size_kb": 0, 00:12:53.792 "state": "online", 00:12:53.792 "raid_level": "raid1", 00:12:53.792 "superblock": false, 00:12:53.792 "num_base_bdevs": 2, 00:12:53.792 "num_base_bdevs_discovered": 2, 00:12:53.792 "num_base_bdevs_operational": 2, 00:12:53.792 "base_bdevs_list": [ 00:12:53.792 { 00:12:53.792 "name": "spare", 00:12:53.792 "uuid": "defb986a-01f9-5f7d-8cd6-ed53fa8b25bd", 00:12:53.792 "is_configured": true, 00:12:53.792 "data_offset": 0, 00:12:53.792 "data_size": 65536 00:12:53.792 }, 00:12:53.792 { 00:12:53.792 "name": "BaseBdev2", 00:12:53.792 "uuid": "975a8763-cc45-51de-b818-162da5089bea", 00:12:53.792 "is_configured": true, 00:12:53.792 "data_offset": 0, 00:12:53.792 "data_size": 65536 00:12:53.792 } 00:12:53.792 ] 00:12:53.792 }' 00:12:53.792 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.792 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:53.792 11:51:20 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.051 "name": "raid_bdev1", 00:12:54.051 "uuid": "33ed5577-e9e3-452c-8fa2-3883931bd268", 00:12:54.051 "strip_size_kb": 0, 00:12:54.051 "state": "online", 00:12:54.051 "raid_level": "raid1", 00:12:54.051 "superblock": false, 00:12:54.051 "num_base_bdevs": 2, 00:12:54.051 "num_base_bdevs_discovered": 2, 00:12:54.051 "num_base_bdevs_operational": 2, 00:12:54.051 "base_bdevs_list": [ 00:12:54.051 { 00:12:54.051 "name": "spare", 00:12:54.051 "uuid": "defb986a-01f9-5f7d-8cd6-ed53fa8b25bd", 00:12:54.051 "is_configured": true, 
00:12:54.051 "data_offset": 0, 00:12:54.051 "data_size": 65536 00:12:54.051 }, 00:12:54.051 { 00:12:54.051 "name": "BaseBdev2", 00:12:54.051 "uuid": "975a8763-cc45-51de-b818-162da5089bea", 00:12:54.051 "is_configured": true, 00:12:54.051 "data_offset": 0, 00:12:54.051 "data_size": 65536 00:12:54.051 } 00:12:54.051 ] 00:12:54.051 }' 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.051 "name": "raid_bdev1", 00:12:54.051 "uuid": "33ed5577-e9e3-452c-8fa2-3883931bd268", 00:12:54.051 "strip_size_kb": 0, 00:12:54.051 "state": "online", 00:12:54.051 "raid_level": "raid1", 00:12:54.051 "superblock": false, 00:12:54.051 "num_base_bdevs": 2, 00:12:54.051 "num_base_bdevs_discovered": 2, 00:12:54.051 "num_base_bdevs_operational": 2, 00:12:54.051 "base_bdevs_list": [ 00:12:54.051 { 00:12:54.051 "name": "spare", 00:12:54.051 "uuid": "defb986a-01f9-5f7d-8cd6-ed53fa8b25bd", 00:12:54.051 "is_configured": true, 00:12:54.051 "data_offset": 0, 00:12:54.051 "data_size": 65536 00:12:54.051 }, 00:12:54.051 { 00:12:54.051 "name": "BaseBdev2", 00:12:54.051 "uuid": "975a8763-cc45-51de-b818-162da5089bea", 00:12:54.051 "is_configured": true, 00:12:54.051 "data_offset": 0, 00:12:54.051 "data_size": 65536 00:12:54.051 } 00:12:54.051 ] 00:12:54.051 }' 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.051 11:51:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.620 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:54.620 11:51:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.620 11:51:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.620 [2024-11-27 11:51:20.768692] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:12:54.620 [2024-11-27 11:51:20.768731] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:54.620 00:12:54.620 Latency(us) 00:12:54.620 [2024-11-27T11:51:21.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.620 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:54.620 raid_bdev1 : 8.80 78.83 236.48 0.00 0.00 17867.55 325.53 109436.53 00:12:54.620 [2024-11-27T11:51:21.005Z] =================================================================================================================== 00:12:54.620 [2024-11-27T11:51:21.005Z] Total : 78.83 236.48 0.00 0.00 17867.55 325.53 109436.53 00:12:54.620 [2024-11-27 11:51:20.882555] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:54.620 [2024-11-27 11:51:20.882633] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.620 [2024-11-27 11:51:20.882711] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:54.620 [2024-11-27 11:51:20.882724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:54.620 { 00:12:54.620 "results": [ 00:12:54.620 { 00:12:54.620 "job": "raid_bdev1", 00:12:54.620 "core_mask": "0x1", 00:12:54.620 "workload": "randrw", 00:12:54.620 "percentage": 50, 00:12:54.620 "status": "finished", 00:12:54.620 "queue_depth": 2, 00:12:54.620 "io_size": 3145728, 00:12:54.620 "runtime": 8.803996, 00:12:54.620 "iops": 78.82784135749266, 00:12:54.620 "mibps": 236.48352407247796, 00:12:54.620 "io_failed": 0, 00:12:54.620 "io_timeout": 0, 00:12:54.620 "avg_latency_us": 17867.55451971358, 00:12:54.620 "min_latency_us": 325.5336244541485, 00:12:54.620 "max_latency_us": 109436.5344978166 00:12:54.620 } 00:12:54.620 ], 00:12:54.620 "core_count": 1 00:12:54.620 } 00:12:54.620 
11:51:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.620 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.620 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:54.620 11:51:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.620 11:51:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.620 11:51:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.620 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:54.620 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:54.620 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:54.620 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:54.620 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:54.620 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:54.620 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:54.620 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:54.620 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:54.620 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:54.620 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:54.620 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:54.620 11:51:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk spare /dev/nbd0 00:12:54.878 /dev/nbd0 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.878 1+0 records in 00:12:54.878 1+0 records out 00:12:54.878 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038665 s, 10.6 MB/s 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@893 -- # return 0 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:54.878 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:55.137 /dev/nbd1 00:12:55.137 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:55.137 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:55.137 11:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:55.137 11:51:21 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@873 -- # local i 00:12:55.137 11:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:55.137 11:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:55.137 11:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:55.137 11:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:55.137 11:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:55.137 11:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:55.137 11:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.137 1+0 records in 00:12:55.137 1+0 records out 00:12:55.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364178 s, 11.2 MB/s 00:12:55.137 11:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.137 11:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:55.137 11:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.137 11:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:55.137 11:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:55.137 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:55.137 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:55.137 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:55.396 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:55.396 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:55.396 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:55.396 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:55.396 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:55.396 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.396 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:55.655 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:55.655 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:55.655 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:55.655 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.655 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.655 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:55.655 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:55.655 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.655 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:55.655 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:55.655 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:55.655 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:12:55.655 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:55.655 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.655 11:51:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:55.939 11:51:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:55.939 11:51:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:55.939 11:51:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:55.939 11:51:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.939 11:51:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.939 11:51:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:55.939 11:51:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:55.939 11:51:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.939 11:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:55.939 11:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76453 00:12:55.939 11:51:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76453 ']' 00:12:55.939 11:51:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76453 00:12:55.939 11:51:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:12:55.939 11:51:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:55.939 11:51:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76453 00:12:55.939 killing process with pid 76453 00:12:55.939 Received shutdown 
signal, test time was about 10.161164 seconds 00:12:55.939 00:12:55.939 Latency(us) 00:12:55.939 [2024-11-27T11:51:22.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.939 [2024-11-27T11:51:22.324Z] =================================================================================================================== 00:12:55.939 [2024-11-27T11:51:22.324Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:55.939 11:51:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:55.939 11:51:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:55.939 11:51:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76453' 00:12:55.939 11:51:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76453 00:12:55.939 [2024-11-27 11:51:22.215032] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:55.939 11:51:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76453 00:12:56.205 [2024-11-27 11:51:22.452349] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:57.582 11:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:57.582 00:12:57.582 real 0m13.379s 00:12:57.582 user 0m16.785s 00:12:57.582 sys 0m1.499s 00:12:57.582 11:51:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.582 11:51:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.582 ************************************ 00:12:57.582 END TEST raid_rebuild_test_io 00:12:57.582 ************************************ 00:12:57.582 11:51:23 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:57.582 11:51:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:57.582 11:51:23 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.582 11:51:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:57.582 ************************************ 00:12:57.582 START TEST raid_rebuild_test_sb_io 00:12:57.582 ************************************ 00:12:57.582 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:12:57.582 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:57.582 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:57.582 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:57.582 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:57.582 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:57.583 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:57.583 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:57.583 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:57.583 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:57.583 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:57.583 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:57.583 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:57.583 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:57.583 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:57.583 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local 
base_bdevs 00:12:57.583 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:57.583 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:57.583 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:57.583 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:57.583 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:57.583 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:57.583 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:57.583 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:57.583 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:57.583 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76849 00:12:57.583 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76849 00:12:57.583 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:57.583 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76849 ']' 00:12:57.583 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.583 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:57.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:57.583 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.583 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:57.583 11:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.583 [2024-11-27 11:51:23.812630] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:12:57.583 [2024-11-27 11:51:23.812760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76849 ] 00:12:57.583 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:57.583 Zero copy mechanism will not be used. 00:12:57.841 [2024-11-27 11:51:23.969756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.841 [2024-11-27 11:51:24.084335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.100 [2024-11-27 11:51:24.281486] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:58.100 [2024-11-27 11:51:24.281536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:58.358 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:58.358 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:12:58.358 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:58.358 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:58.358 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.358 11:51:24 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.358 BaseBdev1_malloc 00:12:58.358 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.358 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:58.358 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.358 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.358 [2024-11-27 11:51:24.693995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:58.358 [2024-11-27 11:51:24.694070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.358 [2024-11-27 11:51:24.694096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:58.358 [2024-11-27 11:51:24.694108] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.358 [2024-11-27 11:51:24.696314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.358 [2024-11-27 11:51:24.696358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:58.358 BaseBdev1 00:12:58.358 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.358 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:58.358 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:58.358 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.358 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.617 BaseBdev2_malloc 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.617 [2024-11-27 11:51:24.749651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:58.617 [2024-11-27 11:51:24.749735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.617 [2024-11-27 11:51:24.749763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:58.617 [2024-11-27 11:51:24.749774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.617 [2024-11-27 11:51:24.752006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.617 [2024-11-27 11:51:24.752051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:58.617 BaseBdev2 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.617 spare_malloc 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.617 spare_delay 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.617 [2024-11-27 11:51:24.829225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:58.617 [2024-11-27 11:51:24.829285] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.617 [2024-11-27 11:51:24.829322] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:58.617 [2024-11-27 11:51:24.829333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.617 [2024-11-27 11:51:24.831549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.617 [2024-11-27 11:51:24.831588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:58.617 spare 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.617 [2024-11-27 11:51:24.841277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:12:58.617 [2024-11-27 11:51:24.843187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:58.617 [2024-11-27 11:51:24.843359] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:58.617 [2024-11-27 11:51:24.843382] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:58.617 [2024-11-27 11:51:24.843663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:58.617 [2024-11-27 11:51:24.843864] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:58.617 [2024-11-27 11:51:24.843879] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:58.617 [2024-11-27 11:51:24.844051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.617 "name": "raid_bdev1", 00:12:58.617 "uuid": "8c46d52f-11c1-4f94-8473-05226bb1c141", 00:12:58.617 "strip_size_kb": 0, 00:12:58.617 "state": "online", 00:12:58.617 "raid_level": "raid1", 00:12:58.617 "superblock": true, 00:12:58.617 "num_base_bdevs": 2, 00:12:58.617 "num_base_bdevs_discovered": 2, 00:12:58.617 "num_base_bdevs_operational": 2, 00:12:58.617 "base_bdevs_list": [ 00:12:58.617 { 00:12:58.617 "name": "BaseBdev1", 00:12:58.617 "uuid": "1c0099ae-ec28-54c4-bfad-b487dcff6efa", 00:12:58.617 "is_configured": true, 00:12:58.617 "data_offset": 2048, 00:12:58.617 "data_size": 63488 00:12:58.617 }, 00:12:58.617 { 00:12:58.617 "name": "BaseBdev2", 00:12:58.617 "uuid": "762ab0ce-206c-57dc-925c-48344c4703a9", 00:12:58.617 "is_configured": true, 00:12:58.617 "data_offset": 2048, 00:12:58.617 "data_size": 63488 00:12:58.617 } 00:12:58.617 ] 00:12:58.617 }' 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.617 11:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd 
bdev_get_bdevs -b raid_bdev1 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:59.187 [2024-11-27 11:51:25.328763] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.187 [2024-11-27 11:51:25.428267] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.187 "name": 
"raid_bdev1", 00:12:59.187 "uuid": "8c46d52f-11c1-4f94-8473-05226bb1c141", 00:12:59.187 "strip_size_kb": 0, 00:12:59.187 "state": "online", 00:12:59.187 "raid_level": "raid1", 00:12:59.187 "superblock": true, 00:12:59.187 "num_base_bdevs": 2, 00:12:59.187 "num_base_bdevs_discovered": 1, 00:12:59.187 "num_base_bdevs_operational": 1, 00:12:59.187 "base_bdevs_list": [ 00:12:59.187 { 00:12:59.187 "name": null, 00:12:59.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.187 "is_configured": false, 00:12:59.187 "data_offset": 0, 00:12:59.187 "data_size": 63488 00:12:59.187 }, 00:12:59.187 { 00:12:59.187 "name": "BaseBdev2", 00:12:59.187 "uuid": "762ab0ce-206c-57dc-925c-48344c4703a9", 00:12:59.187 "is_configured": true, 00:12:59.187 "data_offset": 2048, 00:12:59.187 "data_size": 63488 00:12:59.187 } 00:12:59.187 ] 00:12:59.187 }' 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.187 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.187 [2024-11-27 11:51:25.512939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:59.187 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:59.187 Zero copy mechanism will not be used. 00:12:59.187 Running I/O for 60 seconds... 
00:12:59.757 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:59.757 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.757 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.757 [2024-11-27 11:51:25.903216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:59.757 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.757 11:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:59.757 [2024-11-27 11:51:25.950657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:59.757 [2024-11-27 11:51:25.952604] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:59.757 [2024-11-27 11:51:26.059836] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:59.757 [2024-11-27 11:51:26.060434] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:00.016 [2024-11-27 11:51:26.181247] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:00.016 [2024-11-27 11:51:26.181623] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:00.276 176.00 IOPS, 528.00 MiB/s [2024-11-27T11:51:26.661Z] [2024-11-27 11:51:26.519780] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:00.276 [2024-11-27 11:51:26.520385] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:00.535 [2024-11-27 11:51:26.749981] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:00.535 [2024-11-27 11:51:26.750334] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:00.795 11:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:00.795 11:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.795 11:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:00.795 11:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:00.795 11:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.795 11:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.795 11:51:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.795 11:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.795 11:51:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.795 11:51:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.795 11:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.795 "name": "raid_bdev1", 00:13:00.795 "uuid": "8c46d52f-11c1-4f94-8473-05226bb1c141", 00:13:00.795 "strip_size_kb": 0, 00:13:00.795 "state": "online", 00:13:00.795 "raid_level": "raid1", 00:13:00.795 "superblock": true, 00:13:00.795 "num_base_bdevs": 2, 00:13:00.795 "num_base_bdevs_discovered": 2, 00:13:00.795 "num_base_bdevs_operational": 2, 00:13:00.795 "process": { 00:13:00.795 "type": "rebuild", 00:13:00.795 "target": "spare", 00:13:00.795 "progress": { 
00:13:00.795 "blocks": 12288, 00:13:00.795 "percent": 19 00:13:00.795 } 00:13:00.795 }, 00:13:00.795 "base_bdevs_list": [ 00:13:00.795 { 00:13:00.795 "name": "spare", 00:13:00.795 "uuid": "7ef36b97-c5f1-580e-a71c-a0a0a99bd702", 00:13:00.795 "is_configured": true, 00:13:00.795 "data_offset": 2048, 00:13:00.795 "data_size": 63488 00:13:00.795 }, 00:13:00.795 { 00:13:00.795 "name": "BaseBdev2", 00:13:00.795 "uuid": "762ab0ce-206c-57dc-925c-48344c4703a9", 00:13:00.795 "is_configured": true, 00:13:00.795 "data_offset": 2048, 00:13:00.795 "data_size": 63488 00:13:00.795 } 00:13:00.795 ] 00:13:00.795 }' 00:13:00.795 [2024-11-27 11:51:27.001801] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:00.795 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.795 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:00.795 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.795 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:00.795 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:00.795 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.796 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.796 [2024-11-27 11:51:27.085971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:00.796 [2024-11-27 11:51:27.141476] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:00.796 [2024-11-27 11:51:27.149351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.796 [2024-11-27 11:51:27.149414] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:00.796 [2024-11-27 11:51:27.149426] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:01.055 [2024-11-27 11:51:27.181498] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:01.055 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.055 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:01.055 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.055 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.055 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.055 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.055 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:01.055 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.055 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.055 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.055 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.055 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.055 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.055 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.055 11:51:27 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.055 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.055 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.055 "name": "raid_bdev1", 00:13:01.055 "uuid": "8c46d52f-11c1-4f94-8473-05226bb1c141", 00:13:01.055 "strip_size_kb": 0, 00:13:01.055 "state": "online", 00:13:01.055 "raid_level": "raid1", 00:13:01.055 "superblock": true, 00:13:01.055 "num_base_bdevs": 2, 00:13:01.055 "num_base_bdevs_discovered": 1, 00:13:01.055 "num_base_bdevs_operational": 1, 00:13:01.055 "base_bdevs_list": [ 00:13:01.055 { 00:13:01.055 "name": null, 00:13:01.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.055 "is_configured": false, 00:13:01.055 "data_offset": 0, 00:13:01.055 "data_size": 63488 00:13:01.055 }, 00:13:01.055 { 00:13:01.055 "name": "BaseBdev2", 00:13:01.055 "uuid": "762ab0ce-206c-57dc-925c-48344c4703a9", 00:13:01.055 "is_configured": true, 00:13:01.055 "data_offset": 2048, 00:13:01.055 "data_size": 63488 00:13:01.055 } 00:13:01.055 ] 00:13:01.055 }' 00:13:01.055 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.055 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.314 176.00 IOPS, 528.00 MiB/s [2024-11-27T11:51:27.699Z] 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:01.314 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.314 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:01.314 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:01.315 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.315 
11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.315 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.315 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.315 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.315 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.315 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.315 "name": "raid_bdev1", 00:13:01.315 "uuid": "8c46d52f-11c1-4f94-8473-05226bb1c141", 00:13:01.315 "strip_size_kb": 0, 00:13:01.315 "state": "online", 00:13:01.315 "raid_level": "raid1", 00:13:01.315 "superblock": true, 00:13:01.315 "num_base_bdevs": 2, 00:13:01.315 "num_base_bdevs_discovered": 1, 00:13:01.315 "num_base_bdevs_operational": 1, 00:13:01.315 "base_bdevs_list": [ 00:13:01.315 { 00:13:01.315 "name": null, 00:13:01.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.315 "is_configured": false, 00:13:01.315 "data_offset": 0, 00:13:01.315 "data_size": 63488 00:13:01.315 }, 00:13:01.315 { 00:13:01.315 "name": "BaseBdev2", 00:13:01.315 "uuid": "762ab0ce-206c-57dc-925c-48344c4703a9", 00:13:01.315 "is_configured": true, 00:13:01.315 "data_offset": 2048, 00:13:01.315 "data_size": 63488 00:13:01.315 } 00:13:01.315 ] 00:13:01.315 }' 00:13:01.315 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.573 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:01.573 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.573 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:01.573 11:51:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:01.573 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.573 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.573 [2024-11-27 11:51:27.776974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:01.573 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.573 11:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:01.573 [2024-11-27 11:51:27.837721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:01.573 [2024-11-27 11:51:27.839627] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:01.573 [2024-11-27 11:51:27.952770] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:01.573 [2024-11-27 11:51:27.953383] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:01.832 [2024-11-27 11:51:28.194195] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:01.832 [2024-11-27 11:51:28.194529] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:02.400 169.00 IOPS, 507.00 MiB/s [2024-11-27T11:51:28.785Z] [2024-11-27 11:51:28.546849] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.659 "name": "raid_bdev1", 00:13:02.659 "uuid": "8c46d52f-11c1-4f94-8473-05226bb1c141", 00:13:02.659 "strip_size_kb": 0, 00:13:02.659 "state": "online", 00:13:02.659 "raid_level": "raid1", 00:13:02.659 "superblock": true, 00:13:02.659 "num_base_bdevs": 2, 00:13:02.659 "num_base_bdevs_discovered": 2, 00:13:02.659 "num_base_bdevs_operational": 2, 00:13:02.659 "process": { 00:13:02.659 "type": "rebuild", 00:13:02.659 "target": "spare", 00:13:02.659 "progress": { 00:13:02.659 "blocks": 10240, 00:13:02.659 "percent": 16 00:13:02.659 } 00:13:02.659 }, 00:13:02.659 "base_bdevs_list": [ 00:13:02.659 { 00:13:02.659 "name": "spare", 00:13:02.659 "uuid": "7ef36b97-c5f1-580e-a71c-a0a0a99bd702", 00:13:02.659 "is_configured": true, 00:13:02.659 "data_offset": 2048, 00:13:02.659 "data_size": 63488 00:13:02.659 }, 00:13:02.659 { 00:13:02.659 "name": "BaseBdev2", 00:13:02.659 "uuid": "762ab0ce-206c-57dc-925c-48344c4703a9", 00:13:02.659 "is_configured": true, 00:13:02.659 
"data_offset": 2048, 00:13:02.659 "data_size": 63488 00:13:02.659 } 00:13:02.659 ] 00:13:02.659 }' 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:02.659 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=423 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.659 [2024-11-27 11:51:28.984434] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:02.659 11:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.659 11:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.659 "name": "raid_bdev1", 00:13:02.659 "uuid": "8c46d52f-11c1-4f94-8473-05226bb1c141", 00:13:02.659 "strip_size_kb": 0, 00:13:02.659 "state": "online", 00:13:02.659 "raid_level": "raid1", 00:13:02.659 "superblock": true, 00:13:02.659 "num_base_bdevs": 2, 00:13:02.659 "num_base_bdevs_discovered": 2, 00:13:02.659 "num_base_bdevs_operational": 2, 00:13:02.659 "process": { 00:13:02.659 "type": "rebuild", 00:13:02.659 "target": "spare", 00:13:02.659 "progress": { 00:13:02.659 "blocks": 14336, 00:13:02.659 "percent": 22 00:13:02.659 } 00:13:02.659 }, 00:13:02.659 "base_bdevs_list": [ 00:13:02.659 { 00:13:02.659 "name": "spare", 00:13:02.659 "uuid": "7ef36b97-c5f1-580e-a71c-a0a0a99bd702", 00:13:02.659 "is_configured": true, 00:13:02.659 "data_offset": 2048, 00:13:02.659 "data_size": 63488 00:13:02.659 }, 00:13:02.659 { 00:13:02.659 "name": "BaseBdev2", 00:13:02.659 "uuid": "762ab0ce-206c-57dc-925c-48344c4703a9", 00:13:02.659 "is_configured": true, 00:13:02.659 "data_offset": 2048, 00:13:02.659 "data_size": 63488 00:13:02.659 } 00:13:02.659 ] 00:13:02.659 }' 00:13:02.659 11:51:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.937 11:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.937 11:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.937 11:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.937 11:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:02.937 [2024-11-27 11:51:29.209165] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:02.937 [2024-11-27 11:51:29.209530] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:03.455 144.75 IOPS, 434.25 MiB/s [2024-11-27T11:51:29.840Z] [2024-11-27 11:51:29.645409] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:03.714 [2024-11-27 11:51:29.868108] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:03.714 [2024-11-27 11:51:29.868565] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:03.714 [2024-11-27 11:51:29.994493] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:03.974 11:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:03.974 11:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:03.974 11:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.974 11:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:13:03.974 11:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:03.974 11:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.974 11:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.974 11:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.974 11:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.974 11:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.974 11:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.974 11:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.974 "name": "raid_bdev1", 00:13:03.974 "uuid": "8c46d52f-11c1-4f94-8473-05226bb1c141", 00:13:03.974 "strip_size_kb": 0, 00:13:03.974 "state": "online", 00:13:03.974 "raid_level": "raid1", 00:13:03.974 "superblock": true, 00:13:03.974 "num_base_bdevs": 2, 00:13:03.974 "num_base_bdevs_discovered": 2, 00:13:03.974 "num_base_bdevs_operational": 2, 00:13:03.974 "process": { 00:13:03.974 "type": "rebuild", 00:13:03.974 "target": "spare", 00:13:03.974 "progress": { 00:13:03.974 "blocks": 28672, 00:13:03.974 "percent": 45 00:13:03.974 } 00:13:03.974 }, 00:13:03.974 "base_bdevs_list": [ 00:13:03.974 { 00:13:03.974 "name": "spare", 00:13:03.974 "uuid": "7ef36b97-c5f1-580e-a71c-a0a0a99bd702", 00:13:03.974 "is_configured": true, 00:13:03.974 "data_offset": 2048, 00:13:03.974 "data_size": 63488 00:13:03.974 }, 00:13:03.974 { 00:13:03.974 "name": "BaseBdev2", 00:13:03.974 "uuid": "762ab0ce-206c-57dc-925c-48344c4703a9", 00:13:03.974 "is_configured": true, 00:13:03.974 "data_offset": 2048, 00:13:03.974 "data_size": 63488 00:13:03.974 } 00:13:03.974 ] 00:13:03.974 }' 00:13:03.974 11:51:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.974 11:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:03.974 11:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.974 11:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:03.974 11:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:04.494 124.00 IOPS, 372.00 MiB/s [2024-11-27T11:51:30.879Z] [2024-11-27 11:51:30.698785] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:04.494 [2024-11-27 11:51:30.808122] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:04.494 [2024-11-27 11:51:30.808443] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:05.063 [2024-11-27 11:51:31.137934] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:05.063 [2024-11-27 11:51:31.138590] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:05.063 11:51:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:05.063 11:51:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:05.063 11:51:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.063 11:51:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:05.063 11:51:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:13:05.063 11:51:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.063 11:51:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.063 11:51:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.064 11:51:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.064 11:51:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.064 11:51:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.064 11:51:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.064 "name": "raid_bdev1", 00:13:05.064 "uuid": "8c46d52f-11c1-4f94-8473-05226bb1c141", 00:13:05.064 "strip_size_kb": 0, 00:13:05.064 "state": "online", 00:13:05.064 "raid_level": "raid1", 00:13:05.064 "superblock": true, 00:13:05.064 "num_base_bdevs": 2, 00:13:05.064 "num_base_bdevs_discovered": 2, 00:13:05.064 "num_base_bdevs_operational": 2, 00:13:05.064 "process": { 00:13:05.064 "type": "rebuild", 00:13:05.064 "target": "spare", 00:13:05.064 "progress": { 00:13:05.064 "blocks": 45056, 00:13:05.064 "percent": 70 00:13:05.064 } 00:13:05.064 }, 00:13:05.064 "base_bdevs_list": [ 00:13:05.064 { 00:13:05.064 "name": "spare", 00:13:05.064 "uuid": "7ef36b97-c5f1-580e-a71c-a0a0a99bd702", 00:13:05.064 "is_configured": true, 00:13:05.064 "data_offset": 2048, 00:13:05.064 "data_size": 63488 00:13:05.064 }, 00:13:05.064 { 00:13:05.064 "name": "BaseBdev2", 00:13:05.064 "uuid": "762ab0ce-206c-57dc-925c-48344c4703a9", 00:13:05.064 "is_configured": true, 00:13:05.064 "data_offset": 2048, 00:13:05.064 "data_size": 63488 00:13:05.064 } 00:13:05.064 ] 00:13:05.064 }' 00:13:05.064 11:51:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.064 [2024-11-27 
11:51:31.354936] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:05.064 11:51:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:05.064 11:51:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.064 11:51:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:05.064 11:51:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:05.581 110.83 IOPS, 332.50 MiB/s [2024-11-27T11:51:31.966Z] [2024-11-27 11:51:31.792020] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:05.841 [2024-11-27 11:51:32.134479] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:06.109 11:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:06.109 11:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.109 11:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.109 11:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:06.109 11:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:06.109 11:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.109 11:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.109 11:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.109 11:51:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:06.109 11:51:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.109 11:51:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.109 [2024-11-27 11:51:32.471255] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:06.109 11:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.109 "name": "raid_bdev1", 00:13:06.109 "uuid": "8c46d52f-11c1-4f94-8473-05226bb1c141", 00:13:06.109 "strip_size_kb": 0, 00:13:06.109 "state": "online", 00:13:06.109 "raid_level": "raid1", 00:13:06.109 "superblock": true, 00:13:06.109 "num_base_bdevs": 2, 00:13:06.109 "num_base_bdevs_discovered": 2, 00:13:06.109 "num_base_bdevs_operational": 2, 00:13:06.109 "process": { 00:13:06.109 "type": "rebuild", 00:13:06.109 "target": "spare", 00:13:06.109 "progress": { 00:13:06.109 "blocks": 61440, 00:13:06.109 "percent": 96 00:13:06.109 } 00:13:06.109 }, 00:13:06.109 "base_bdevs_list": [ 00:13:06.109 { 00:13:06.109 "name": "spare", 00:13:06.109 "uuid": "7ef36b97-c5f1-580e-a71c-a0a0a99bd702", 00:13:06.109 "is_configured": true, 00:13:06.109 "data_offset": 2048, 00:13:06.109 "data_size": 63488 00:13:06.109 }, 00:13:06.109 { 00:13:06.109 "name": "BaseBdev2", 00:13:06.109 "uuid": "762ab0ce-206c-57dc-925c-48344c4703a9", 00:13:06.109 "is_configured": true, 00:13:06.109 "data_offset": 2048, 00:13:06.109 "data_size": 63488 00:13:06.109 } 00:13:06.109 ] 00:13:06.109 }' 00:13:06.109 11:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.367 99.71 IOPS, 299.14 MiB/s [2024-11-27T11:51:32.752Z] 11:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:06.367 11:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.367 [2024-11-27 11:51:32.571029] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:06.367 [2024-11-27 11:51:32.573450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.367 11:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.367 11:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:07.303 91.88 IOPS, 275.62 MiB/s [2024-11-27T11:51:33.688Z] 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:07.303 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.303 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.303 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.303 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.303 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.303 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.303 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.303 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.303 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.303 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.303 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.303 "name": "raid_bdev1", 00:13:07.303 "uuid": "8c46d52f-11c1-4f94-8473-05226bb1c141", 00:13:07.303 "strip_size_kb": 0, 00:13:07.303 "state": "online", 00:13:07.303 
"raid_level": "raid1", 00:13:07.303 "superblock": true, 00:13:07.303 "num_base_bdevs": 2, 00:13:07.303 "num_base_bdevs_discovered": 2, 00:13:07.303 "num_base_bdevs_operational": 2, 00:13:07.303 "base_bdevs_list": [ 00:13:07.303 { 00:13:07.303 "name": "spare", 00:13:07.303 "uuid": "7ef36b97-c5f1-580e-a71c-a0a0a99bd702", 00:13:07.303 "is_configured": true, 00:13:07.303 "data_offset": 2048, 00:13:07.303 "data_size": 63488 00:13:07.303 }, 00:13:07.303 { 00:13:07.303 "name": "BaseBdev2", 00:13:07.303 "uuid": "762ab0ce-206c-57dc-925c-48344c4703a9", 00:13:07.303 "is_configured": true, 00:13:07.303 "data_offset": 2048, 00:13:07.303 "data_size": 63488 00:13:07.303 } 00:13:07.303 ] 00:13:07.303 }' 00:13:07.303 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.562 "name": "raid_bdev1", 00:13:07.562 "uuid": "8c46d52f-11c1-4f94-8473-05226bb1c141", 00:13:07.562 "strip_size_kb": 0, 00:13:07.562 "state": "online", 00:13:07.562 "raid_level": "raid1", 00:13:07.562 "superblock": true, 00:13:07.562 "num_base_bdevs": 2, 00:13:07.562 "num_base_bdevs_discovered": 2, 00:13:07.562 "num_base_bdevs_operational": 2, 00:13:07.562 "base_bdevs_list": [ 00:13:07.562 { 00:13:07.562 "name": "spare", 00:13:07.562 "uuid": "7ef36b97-c5f1-580e-a71c-a0a0a99bd702", 00:13:07.562 "is_configured": true, 00:13:07.562 "data_offset": 2048, 00:13:07.562 "data_size": 63488 00:13:07.562 }, 00:13:07.562 { 00:13:07.562 "name": "BaseBdev2", 00:13:07.562 "uuid": "762ab0ce-206c-57dc-925c-48344c4703a9", 00:13:07.562 "is_configured": true, 00:13:07.562 "data_offset": 2048, 00:13:07.562 "data_size": 63488 00:13:07.562 } 00:13:07.562 ] 00:13:07.562 }' 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:07.562 11:51:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.562 "name": "raid_bdev1", 00:13:07.562 "uuid": "8c46d52f-11c1-4f94-8473-05226bb1c141", 00:13:07.562 "strip_size_kb": 0, 00:13:07.562 "state": "online", 00:13:07.562 "raid_level": "raid1", 00:13:07.562 "superblock": true, 00:13:07.562 "num_base_bdevs": 2, 00:13:07.562 "num_base_bdevs_discovered": 2, 00:13:07.562 "num_base_bdevs_operational": 2, 
00:13:07.562 "base_bdevs_list": [ 00:13:07.562 { 00:13:07.562 "name": "spare", 00:13:07.562 "uuid": "7ef36b97-c5f1-580e-a71c-a0a0a99bd702", 00:13:07.562 "is_configured": true, 00:13:07.562 "data_offset": 2048, 00:13:07.562 "data_size": 63488 00:13:07.562 }, 00:13:07.562 { 00:13:07.562 "name": "BaseBdev2", 00:13:07.562 "uuid": "762ab0ce-206c-57dc-925c-48344c4703a9", 00:13:07.562 "is_configured": true, 00:13:07.562 "data_offset": 2048, 00:13:07.562 "data_size": 63488 00:13:07.562 } 00:13:07.562 ] 00:13:07.562 }' 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.562 11:51:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.821 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:07.821 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.821 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.821 [2024-11-27 11:51:34.190810] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:07.821 [2024-11-27 11:51:34.190858] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:08.079 00:13:08.079 Latency(us) 00:13:08.079 [2024-11-27T11:51:34.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.079 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:08.079 raid_bdev1 : 8.74 87.30 261.89 0.00 0.00 16818.85 316.59 116304.94 00:13:08.079 [2024-11-27T11:51:34.464Z] =================================================================================================================== 00:13:08.079 [2024-11-27T11:51:34.465Z] Total : 87.30 261.89 0.00 0.00 16818.85 316.59 116304.94 00:13:08.080 [2024-11-27 11:51:34.260635] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:13:08.080 [2024-11-27 11:51:34.260709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.080 [2024-11-27 11:51:34.260785] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:08.080 [2024-11-27 11:51:34.260797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:08.080 { 00:13:08.080 "results": [ 00:13:08.080 { 00:13:08.080 "job": "raid_bdev1", 00:13:08.080 "core_mask": "0x1", 00:13:08.080 "workload": "randrw", 00:13:08.080 "percentage": 50, 00:13:08.080 "status": "finished", 00:13:08.080 "queue_depth": 2, 00:13:08.080 "io_size": 3145728, 00:13:08.080 "runtime": 8.740382, 00:13:08.080 "iops": 87.2959557145214, 00:13:08.080 "mibps": 261.8878671435642, 00:13:08.080 "io_failed": 0, 00:13:08.080 "io_timeout": 0, 00:13:08.080 "avg_latency_us": 16818.84851225054, 00:13:08.080 "min_latency_us": 316.5903930131004, 00:13:08.080 "max_latency_us": 116304.93624454149 00:13:08.080 } 00:13:08.080 ], 00:13:08.080 "core_count": 1 00:13:08.080 } 00:13:08.080 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.080 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.080 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.080 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.080 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:08.080 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.080 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:08.080 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:08.080 11:51:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:08.080 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:08.080 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:08.080 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:08.080 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:08.080 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:08.080 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:08.080 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:08.080 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:08.080 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:08.080 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:08.339 /dev/nbd0 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w 
nbd0 /proc/partitions 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:08.339 1+0 records in 00:13:08.339 1+0 records out 00:13:08.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445516 s, 9.2 MB/s 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:08.339 11:51:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:08.339 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:08.598 /dev/nbd1 00:13:08.598 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:08.598 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:08.598 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:08.598 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:08.598 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:08.598 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:08.598 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:08.598 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:08.598 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:08.598 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:08.598 11:51:34 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:08.598 1+0 records in 00:13:08.598 1+0 records out 00:13:08.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284854 s, 14.4 MB/s 00:13:08.598 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.598 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:08.598 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.598 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:08.598 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:08.598 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:08.598 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:08.598 11:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:08.857 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:08.857 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:08.857 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:08.857 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:08.857 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:08.857 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.857 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:09.115 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:09.115 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:09.115 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:09.115 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.115 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.115 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:09.115 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:09.115 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.115 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:09.115 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:09.115 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:09.115 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:09.115 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:09.115 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.115 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:09.374 11:51:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.374 [2024-11-27 11:51:35.549738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:09.374 [2024-11-27 11:51:35.549806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.374 [2024-11-27 11:51:35.549846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:09.374 [2024-11-27 11:51:35.549860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.374 [2024-11-27 
11:51:35.552419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.374 [2024-11-27 11:51:35.552464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:09.374 [2024-11-27 11:51:35.552571] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:09.374 [2024-11-27 11:51:35.552642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:09.374 [2024-11-27 11:51:35.552826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:09.374 spare 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.374 [2024-11-27 11:51:35.652788] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:09.374 [2024-11-27 11:51:35.652858] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:09.374 [2024-11-27 11:51:35.653269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:09.374 [2024-11-27 11:51:35.653515] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:09.374 [2024-11-27 11:51:35.653548] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:09.374 [2024-11-27 11:51:35.653776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.374 "name": "raid_bdev1", 00:13:09.374 "uuid": "8c46d52f-11c1-4f94-8473-05226bb1c141", 00:13:09.374 "strip_size_kb": 0, 00:13:09.374 "state": "online", 00:13:09.374 "raid_level": "raid1", 00:13:09.374 "superblock": true, 00:13:09.374 
"num_base_bdevs": 2, 00:13:09.374 "num_base_bdevs_discovered": 2, 00:13:09.374 "num_base_bdevs_operational": 2, 00:13:09.374 "base_bdevs_list": [ 00:13:09.374 { 00:13:09.374 "name": "spare", 00:13:09.374 "uuid": "7ef36b97-c5f1-580e-a71c-a0a0a99bd702", 00:13:09.374 "is_configured": true, 00:13:09.374 "data_offset": 2048, 00:13:09.374 "data_size": 63488 00:13:09.374 }, 00:13:09.374 { 00:13:09.374 "name": "BaseBdev2", 00:13:09.374 "uuid": "762ab0ce-206c-57dc-925c-48344c4703a9", 00:13:09.374 "is_configured": true, 00:13:09.374 "data_offset": 2048, 00:13:09.374 "data_size": 63488 00:13:09.374 } 00:13:09.374 ] 00:13:09.374 }' 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.374 11:51:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.940 11:51:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.940 "name": "raid_bdev1", 00:13:09.940 "uuid": "8c46d52f-11c1-4f94-8473-05226bb1c141", 00:13:09.940 "strip_size_kb": 0, 00:13:09.940 "state": "online", 00:13:09.940 "raid_level": "raid1", 00:13:09.940 "superblock": true, 00:13:09.940 "num_base_bdevs": 2, 00:13:09.940 "num_base_bdevs_discovered": 2, 00:13:09.940 "num_base_bdevs_operational": 2, 00:13:09.940 "base_bdevs_list": [ 00:13:09.940 { 00:13:09.940 "name": "spare", 00:13:09.940 "uuid": "7ef36b97-c5f1-580e-a71c-a0a0a99bd702", 00:13:09.940 "is_configured": true, 00:13:09.940 "data_offset": 2048, 00:13:09.940 "data_size": 63488 00:13:09.940 }, 00:13:09.940 { 00:13:09.940 "name": "BaseBdev2", 00:13:09.940 "uuid": "762ab0ce-206c-57dc-925c-48344c4703a9", 00:13:09.940 "is_configured": true, 00:13:09.940 "data_offset": 2048, 00:13:09.940 "data_size": 63488 00:13:09.940 } 00:13:09.940 ] 00:13:09.940 }' 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.940 11:51:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.940 [2024-11-27 11:51:36.280849] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.940 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.198 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.198 "name": "raid_bdev1", 00:13:10.198 "uuid": "8c46d52f-11c1-4f94-8473-05226bb1c141", 00:13:10.198 "strip_size_kb": 0, 00:13:10.198 "state": "online", 00:13:10.198 "raid_level": "raid1", 00:13:10.198 "superblock": true, 00:13:10.198 "num_base_bdevs": 2, 00:13:10.198 "num_base_bdevs_discovered": 1, 00:13:10.198 "num_base_bdevs_operational": 1, 00:13:10.198 "base_bdevs_list": [ 00:13:10.198 { 00:13:10.198 "name": null, 00:13:10.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.198 "is_configured": false, 00:13:10.198 "data_offset": 0, 00:13:10.198 "data_size": 63488 00:13:10.198 }, 00:13:10.198 { 00:13:10.198 "name": "BaseBdev2", 00:13:10.198 "uuid": "762ab0ce-206c-57dc-925c-48344c4703a9", 00:13:10.198 "is_configured": true, 00:13:10.198 "data_offset": 2048, 00:13:10.198 "data_size": 63488 00:13:10.198 } 00:13:10.198 ] 00:13:10.198 }' 00:13:10.198 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.198 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.457 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:10.457 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.457 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.457 [2024-11-27 11:51:36.712219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:10.457 
[2024-11-27 11:51:36.712457] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:10.457 [2024-11-27 11:51:36.712475] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:10.457 [2024-11-27 11:51:36.712521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:10.457 [2024-11-27 11:51:36.731895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:10.457 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.458 11:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:10.458 [2024-11-27 11:51:36.734088] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:11.394 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:11.394 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.394 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:11.394 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:11.394 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.394 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.394 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.394 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.394 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.394 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.652 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.652 "name": "raid_bdev1", 00:13:11.652 "uuid": "8c46d52f-11c1-4f94-8473-05226bb1c141", 00:13:11.652 "strip_size_kb": 0, 00:13:11.652 "state": "online", 00:13:11.652 "raid_level": "raid1", 00:13:11.652 "superblock": true, 00:13:11.652 "num_base_bdevs": 2, 00:13:11.652 "num_base_bdevs_discovered": 2, 00:13:11.652 "num_base_bdevs_operational": 2, 00:13:11.652 "process": { 00:13:11.652 "type": "rebuild", 00:13:11.652 "target": "spare", 00:13:11.652 "progress": { 00:13:11.652 "blocks": 20480, 00:13:11.652 "percent": 32 00:13:11.652 } 00:13:11.652 }, 00:13:11.652 "base_bdevs_list": [ 00:13:11.652 { 00:13:11.652 "name": "spare", 00:13:11.652 "uuid": "7ef36b97-c5f1-580e-a71c-a0a0a99bd702", 00:13:11.652 "is_configured": true, 00:13:11.652 "data_offset": 2048, 00:13:11.652 "data_size": 63488 00:13:11.652 }, 00:13:11.652 { 00:13:11.652 "name": "BaseBdev2", 00:13:11.652 "uuid": "762ab0ce-206c-57dc-925c-48344c4703a9", 00:13:11.652 "is_configured": true, 00:13:11.652 "data_offset": 2048, 00:13:11.652 "data_size": 63488 00:13:11.652 } 00:13:11.652 ] 00:13:11.652 }' 00:13:11.652 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.652 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:11.652 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.652 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:11.652 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:11.652 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.652 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:11.652 [2024-11-27 11:51:37.890133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:11.652 [2024-11-27 11:51:37.940544] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:11.652 [2024-11-27 11:51:37.940617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.652 [2024-11-27 11:51:37.940638] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:11.652 [2024-11-27 11:51:37.940647] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:11.652 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.652 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:11.652 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.652 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.653 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.653 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.653 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:11.653 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.653 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.653 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.653 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.653 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:13:11.653 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.653 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.653 11:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.653 11:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.911 11:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.911 "name": "raid_bdev1", 00:13:11.911 "uuid": "8c46d52f-11c1-4f94-8473-05226bb1c141", 00:13:11.911 "strip_size_kb": 0, 00:13:11.911 "state": "online", 00:13:11.911 "raid_level": "raid1", 00:13:11.911 "superblock": true, 00:13:11.911 "num_base_bdevs": 2, 00:13:11.912 "num_base_bdevs_discovered": 1, 00:13:11.912 "num_base_bdevs_operational": 1, 00:13:11.912 "base_bdevs_list": [ 00:13:11.912 { 00:13:11.912 "name": null, 00:13:11.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.912 "is_configured": false, 00:13:11.912 "data_offset": 0, 00:13:11.912 "data_size": 63488 00:13:11.912 }, 00:13:11.912 { 00:13:11.912 "name": "BaseBdev2", 00:13:11.912 "uuid": "762ab0ce-206c-57dc-925c-48344c4703a9", 00:13:11.912 "is_configured": true, 00:13:11.912 "data_offset": 2048, 00:13:11.912 "data_size": 63488 00:13:11.912 } 00:13:11.912 ] 00:13:11.912 }' 00:13:11.912 11:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.912 11:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.171 11:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:12.171 11:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.171 11:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:12.171 [2024-11-27 11:51:38.460437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:12.171 [2024-11-27 11:51:38.460519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.171 [2024-11-27 11:51:38.460550] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:12.171 [2024-11-27 11:51:38.460561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.171 [2024-11-27 11:51:38.461153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.171 [2024-11-27 11:51:38.461188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:12.171 [2024-11-27 11:51:38.461303] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:12.171 [2024-11-27 11:51:38.461324] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:12.171 [2024-11-27 11:51:38.461341] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:12.171 [2024-11-27 11:51:38.461371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:12.171 [2024-11-27 11:51:38.480901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:12.171 spare 00:13:12.171 11:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.171 11:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:12.171 [2024-11-27 11:51:38.483094] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:13.109 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:13.109 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.109 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:13.109 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:13.109 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.368 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.368 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.368 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.368 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.368 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.368 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.368 "name": "raid_bdev1", 00:13:13.368 "uuid": "8c46d52f-11c1-4f94-8473-05226bb1c141", 00:13:13.368 "strip_size_kb": 0, 00:13:13.368 
"state": "online", 00:13:13.368 "raid_level": "raid1", 00:13:13.368 "superblock": true, 00:13:13.368 "num_base_bdevs": 2, 00:13:13.368 "num_base_bdevs_discovered": 2, 00:13:13.368 "num_base_bdevs_operational": 2, 00:13:13.368 "process": { 00:13:13.368 "type": "rebuild", 00:13:13.368 "target": "spare", 00:13:13.368 "progress": { 00:13:13.368 "blocks": 20480, 00:13:13.368 "percent": 32 00:13:13.368 } 00:13:13.368 }, 00:13:13.368 "base_bdevs_list": [ 00:13:13.368 { 00:13:13.368 "name": "spare", 00:13:13.368 "uuid": "7ef36b97-c5f1-580e-a71c-a0a0a99bd702", 00:13:13.368 "is_configured": true, 00:13:13.368 "data_offset": 2048, 00:13:13.368 "data_size": 63488 00:13:13.368 }, 00:13:13.368 { 00:13:13.368 "name": "BaseBdev2", 00:13:13.368 "uuid": "762ab0ce-206c-57dc-925c-48344c4703a9", 00:13:13.368 "is_configured": true, 00:13:13.368 "data_offset": 2048, 00:13:13.368 "data_size": 63488 00:13:13.368 } 00:13:13.368 ] 00:13:13.368 }' 00:13:13.368 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.368 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:13.368 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.368 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:13.368 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:13.368 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.368 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.368 [2024-11-27 11:51:39.626279] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:13.368 [2024-11-27 11:51:39.689267] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:13:13.368 [2024-11-27 11:51:39.689370] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.368 [2024-11-27 11:51:39.689388] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:13.368 [2024-11-27 11:51:39.689399] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:13.368 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.368 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:13.368 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.368 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.368 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.368 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.368 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:13.368 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.368 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.368 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.368 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.368 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.368 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.368 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.368 11:51:39 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.629 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.629 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.629 "name": "raid_bdev1", 00:13:13.629 "uuid": "8c46d52f-11c1-4f94-8473-05226bb1c141", 00:13:13.629 "strip_size_kb": 0, 00:13:13.629 "state": "online", 00:13:13.629 "raid_level": "raid1", 00:13:13.629 "superblock": true, 00:13:13.629 "num_base_bdevs": 2, 00:13:13.629 "num_base_bdevs_discovered": 1, 00:13:13.629 "num_base_bdevs_operational": 1, 00:13:13.629 "base_bdevs_list": [ 00:13:13.629 { 00:13:13.629 "name": null, 00:13:13.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.629 "is_configured": false, 00:13:13.629 "data_offset": 0, 00:13:13.629 "data_size": 63488 00:13:13.629 }, 00:13:13.629 { 00:13:13.629 "name": "BaseBdev2", 00:13:13.629 "uuid": "762ab0ce-206c-57dc-925c-48344c4703a9", 00:13:13.629 "is_configured": true, 00:13:13.629 "data_offset": 2048, 00:13:13.629 "data_size": 63488 00:13:13.629 } 00:13:13.629 ] 00:13:13.629 }' 00:13:13.629 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.629 11:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.889 11:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:13.889 11:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.889 11:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:13.889 11:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:13.889 11:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.889 11:51:40 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.889 11:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.889 11:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.889 11:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.889 11:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.889 11:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.889 "name": "raid_bdev1", 00:13:13.889 "uuid": "8c46d52f-11c1-4f94-8473-05226bb1c141", 00:13:13.889 "strip_size_kb": 0, 00:13:13.889 "state": "online", 00:13:13.889 "raid_level": "raid1", 00:13:13.889 "superblock": true, 00:13:13.889 "num_base_bdevs": 2, 00:13:13.889 "num_base_bdevs_discovered": 1, 00:13:13.889 "num_base_bdevs_operational": 1, 00:13:13.889 "base_bdevs_list": [ 00:13:13.889 { 00:13:13.889 "name": null, 00:13:13.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.889 "is_configured": false, 00:13:13.889 "data_offset": 0, 00:13:13.889 "data_size": 63488 00:13:13.889 }, 00:13:13.889 { 00:13:13.889 "name": "BaseBdev2", 00:13:13.889 "uuid": "762ab0ce-206c-57dc-925c-48344c4703a9", 00:13:13.889 "is_configured": true, 00:13:13.889 "data_offset": 2048, 00:13:13.889 "data_size": 63488 00:13:13.889 } 00:13:13.889 ] 00:13:13.889 }' 00:13:13.889 11:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.150 11:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:14.150 11:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.150 11:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:14.150 11:51:40 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:14.150 11:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.150 11:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.150 11:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.150 11:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:14.150 11:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.150 11:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.150 [2024-11-27 11:51:40.333760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:14.150 [2024-11-27 11:51:40.333859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.150 [2024-11-27 11:51:40.333891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:14.150 [2024-11-27 11:51:40.333906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.150 [2024-11-27 11:51:40.334426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.150 [2024-11-27 11:51:40.334461] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:14.150 [2024-11-27 11:51:40.334556] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:14.150 [2024-11-27 11:51:40.334583] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:14.150 [2024-11-27 11:51:40.334592] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:14.150 [2024-11-27 11:51:40.334605] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:14.150 BaseBdev1 00:13:14.150 11:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.150 11:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:15.113 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:15.113 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.113 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.113 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.113 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.113 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:15.113 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.113 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.113 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.113 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.113 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.113 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.113 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.113 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.113 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.113 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.113 "name": "raid_bdev1", 00:13:15.113 "uuid": "8c46d52f-11c1-4f94-8473-05226bb1c141", 00:13:15.113 "strip_size_kb": 0, 00:13:15.113 "state": "online", 00:13:15.113 "raid_level": "raid1", 00:13:15.113 "superblock": true, 00:13:15.113 "num_base_bdevs": 2, 00:13:15.113 "num_base_bdevs_discovered": 1, 00:13:15.113 "num_base_bdevs_operational": 1, 00:13:15.113 "base_bdevs_list": [ 00:13:15.113 { 00:13:15.113 "name": null, 00:13:15.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.113 "is_configured": false, 00:13:15.113 "data_offset": 0, 00:13:15.113 "data_size": 63488 00:13:15.113 }, 00:13:15.113 { 00:13:15.113 "name": "BaseBdev2", 00:13:15.113 "uuid": "762ab0ce-206c-57dc-925c-48344c4703a9", 00:13:15.113 "is_configured": true, 00:13:15.113 "data_offset": 2048, 00:13:15.113 "data_size": 63488 00:13:15.113 } 00:13:15.113 ] 00:13:15.113 }' 00:13:15.113 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.113 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.682 "name": "raid_bdev1", 00:13:15.682 "uuid": "8c46d52f-11c1-4f94-8473-05226bb1c141", 00:13:15.682 "strip_size_kb": 0, 00:13:15.682 "state": "online", 00:13:15.682 "raid_level": "raid1", 00:13:15.682 "superblock": true, 00:13:15.682 "num_base_bdevs": 2, 00:13:15.682 "num_base_bdevs_discovered": 1, 00:13:15.682 "num_base_bdevs_operational": 1, 00:13:15.682 "base_bdevs_list": [ 00:13:15.682 { 00:13:15.682 "name": null, 00:13:15.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.682 "is_configured": false, 00:13:15.682 "data_offset": 0, 00:13:15.682 "data_size": 63488 00:13:15.682 }, 00:13:15.682 { 00:13:15.682 "name": "BaseBdev2", 00:13:15.682 "uuid": "762ab0ce-206c-57dc-925c-48344c4703a9", 00:13:15.682 "is_configured": true, 00:13:15.682 "data_offset": 2048, 00:13:15.682 "data_size": 63488 00:13:15.682 } 00:13:15.682 ] 00:13:15.682 }' 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.682 [2024-11-27 11:51:41.931277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:15.682 [2024-11-27 11:51:41.931480] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:15.682 [2024-11-27 11:51:41.931495] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:15.682 request: 00:13:15.682 { 00:13:15.682 "base_bdev": "BaseBdev1", 00:13:15.682 "raid_bdev": "raid_bdev1", 00:13:15.682 "method": "bdev_raid_add_base_bdev", 00:13:15.682 "req_id": 1 00:13:15.682 } 00:13:15.682 Got JSON-RPC error response 00:13:15.682 response: 00:13:15.682 { 00:13:15.682 "code": -22, 00:13:15.682 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:15.682 } 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:15.682 11:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:16.622 11:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:16.622 11:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.622 11:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.622 11:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.622 11:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.622 11:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:16.622 11:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.622 11:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.622 11:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.622 11:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.622 11:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.622 11:51:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.622 11:51:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.622 11:51:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.622 11:51:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.622 11:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.622 "name": "raid_bdev1", 00:13:16.622 "uuid": "8c46d52f-11c1-4f94-8473-05226bb1c141", 00:13:16.622 "strip_size_kb": 0, 00:13:16.622 "state": "online", 00:13:16.622 "raid_level": "raid1", 00:13:16.622 "superblock": true, 00:13:16.622 "num_base_bdevs": 2, 00:13:16.622 "num_base_bdevs_discovered": 1, 00:13:16.622 "num_base_bdevs_operational": 1, 00:13:16.622 "base_bdevs_list": [ 00:13:16.622 { 00:13:16.622 "name": null, 00:13:16.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.622 "is_configured": false, 00:13:16.622 "data_offset": 0, 00:13:16.622 "data_size": 63488 00:13:16.622 }, 00:13:16.622 { 00:13:16.622 "name": "BaseBdev2", 00:13:16.622 "uuid": "762ab0ce-206c-57dc-925c-48344c4703a9", 00:13:16.622 "is_configured": true, 00:13:16.622 "data_offset": 2048, 00:13:16.622 "data_size": 63488 00:13:16.622 } 00:13:16.622 ] 00:13:16.622 }' 00:13:16.622 11:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.622 11:51:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.191 11:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:17.191 11:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.191 11:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:17.191 11:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:17.191 11:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.191 11:51:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.191 11:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.191 11:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.191 11:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.191 11:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.191 11:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.191 "name": "raid_bdev1", 00:13:17.191 "uuid": "8c46d52f-11c1-4f94-8473-05226bb1c141", 00:13:17.191 "strip_size_kb": 0, 00:13:17.191 "state": "online", 00:13:17.191 "raid_level": "raid1", 00:13:17.191 "superblock": true, 00:13:17.191 "num_base_bdevs": 2, 00:13:17.191 "num_base_bdevs_discovered": 1, 00:13:17.191 "num_base_bdevs_operational": 1, 00:13:17.191 "base_bdevs_list": [ 00:13:17.191 { 00:13:17.191 "name": null, 00:13:17.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.191 "is_configured": false, 00:13:17.191 "data_offset": 0, 00:13:17.191 "data_size": 63488 00:13:17.191 }, 00:13:17.191 { 00:13:17.191 "name": "BaseBdev2", 00:13:17.191 "uuid": "762ab0ce-206c-57dc-925c-48344c4703a9", 00:13:17.191 "is_configured": true, 00:13:17.191 "data_offset": 2048, 00:13:17.191 "data_size": 63488 00:13:17.191 } 00:13:17.191 ] 00:13:17.191 }' 00:13:17.191 11:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.191 11:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:17.191 11:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.191 11:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:17.191 11:51:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76849 00:13:17.191 11:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76849 ']' 00:13:17.191 11:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76849 00:13:17.191 11:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:17.191 11:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:17.191 11:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76849 00:13:17.191 11:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:17.191 11:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:17.191 killing process with pid 76849 00:13:17.191 11:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76849' 00:13:17.191 11:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76849 00:13:17.191 Received shutdown signal, test time was about 18.063431 seconds 00:13:17.191 00:13:17.191 Latency(us) 00:13:17.191 [2024-11-27T11:51:43.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.191 [2024-11-27T11:51:43.576Z] =================================================================================================================== 00:13:17.191 [2024-11-27T11:51:43.576Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:17.191 [2024-11-27 11:51:43.543776] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:17.191 [2024-11-27 11:51:43.543934] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:17.191 11:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76849 00:13:17.191 [2024-11-27 11:51:43.544008] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:17.191 [2024-11-27 11:51:43.544020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:17.451 [2024-11-27 11:51:43.784506] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:18.831 00:13:18.831 real 0m21.324s 00:13:18.831 user 0m27.678s 00:13:18.831 sys 0m2.149s 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.831 ************************************ 00:13:18.831 END TEST raid_rebuild_test_sb_io 00:13:18.831 ************************************ 00:13:18.831 11:51:45 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:18.831 11:51:45 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:18.831 11:51:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:18.831 11:51:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.831 11:51:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:18.831 ************************************ 00:13:18.831 START TEST raid_rebuild_test 00:13:18.831 ************************************ 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:18.831 11:51:45 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77557 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77557 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77557 ']' 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:18.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:18.831 11:51:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.831 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:18.831 Zero copy mechanism will not be used. 
00:13:18.831 [2024-11-27 11:51:45.211315] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:13:18.831 [2024-11-27 11:51:45.211455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77557 ] 00:13:19.091 [2024-11-27 11:51:45.386965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.351 [2024-11-27 11:51:45.507793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.351 [2024-11-27 11:51:45.711790] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:19.351 [2024-11-27 11:51:45.711861] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:19.920 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:19.920 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:19.920 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:19.920 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:19.920 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.920 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.920 BaseBdev1_malloc 00:13:19.920 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.920 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:19.920 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.920 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.920 
[2024-11-27 11:51:46.115085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:19.920 [2024-11-27 11:51:46.115259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.921 [2024-11-27 11:51:46.115311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:19.921 [2024-11-27 11:51:46.115357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.921 [2024-11-27 11:51:46.117775] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.921 [2024-11-27 11:51:46.117889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:19.921 BaseBdev1 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.921 BaseBdev2_malloc 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.921 [2024-11-27 11:51:46.173528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:19.921 [2024-11-27 11:51:46.173647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:19.921 [2024-11-27 11:51:46.173694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:19.921 [2024-11-27 11:51:46.173755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.921 [2024-11-27 11:51:46.176131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.921 BaseBdev2 00:13:19.921 [2024-11-27 11:51:46.176216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.921 BaseBdev3_malloc 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.921 [2024-11-27 11:51:46.241636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:19.921 [2024-11-27 11:51:46.241738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.921 [2024-11-27 11:51:46.241799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:19.921 [2024-11-27 11:51:46.241847] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.921 [2024-11-27 11:51:46.244118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.921 BaseBdev3 00:13:19.921 [2024-11-27 11:51:46.244199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.921 BaseBdev4_malloc 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.921 [2024-11-27 11:51:46.297123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:19.921 [2024-11-27 11:51:46.297266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.921 [2024-11-27 11:51:46.297313] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:19.921 [2024-11-27 11:51:46.297349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.921 [2024-11-27 11:51:46.299769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.921 [2024-11-27 11:51:46.299860] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:19.921 BaseBdev4 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.921 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.181 spare_malloc 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.181 spare_delay 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.181 [2024-11-27 11:51:46.362550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:20.181 [2024-11-27 11:51:46.362723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.181 [2024-11-27 11:51:46.362785] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:20.181 [2024-11-27 11:51:46.362857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.181 [2024-11-27 
11:51:46.365506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.181 [2024-11-27 11:51:46.365618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:20.181 spare 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.181 [2024-11-27 11:51:46.374587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:20.181 [2024-11-27 11:51:46.376607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:20.181 [2024-11-27 11:51:46.376740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:20.181 [2024-11-27 11:51:46.376828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:20.181 [2024-11-27 11:51:46.376972] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:20.181 [2024-11-27 11:51:46.377023] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:20.181 [2024-11-27 11:51:46.377380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:20.181 [2024-11-27 11:51:46.377618] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:20.181 [2024-11-27 11:51:46.377665] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:20.181 [2024-11-27 11:51:46.377887] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.181 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.181 "name": "raid_bdev1", 00:13:20.181 "uuid": "20dd19db-4d33-4d93-98c6-b0cb7c63dcf5", 00:13:20.181 "strip_size_kb": 0, 00:13:20.181 "state": "online", 00:13:20.181 "raid_level": 
"raid1", 00:13:20.181 "superblock": false, 00:13:20.181 "num_base_bdevs": 4, 00:13:20.181 "num_base_bdevs_discovered": 4, 00:13:20.181 "num_base_bdevs_operational": 4, 00:13:20.181 "base_bdevs_list": [ 00:13:20.181 { 00:13:20.181 "name": "BaseBdev1", 00:13:20.181 "uuid": "30547ff4-63e0-562e-9d41-fdc10a3bdec1", 00:13:20.181 "is_configured": true, 00:13:20.181 "data_offset": 0, 00:13:20.181 "data_size": 65536 00:13:20.181 }, 00:13:20.181 { 00:13:20.181 "name": "BaseBdev2", 00:13:20.181 "uuid": "82e62afe-ba5c-5588-8be3-e360932b57f6", 00:13:20.181 "is_configured": true, 00:13:20.181 "data_offset": 0, 00:13:20.181 "data_size": 65536 00:13:20.181 }, 00:13:20.181 { 00:13:20.181 "name": "BaseBdev3", 00:13:20.182 "uuid": "b957fe3e-d827-59f7-b8ba-4f35f1951436", 00:13:20.182 "is_configured": true, 00:13:20.182 "data_offset": 0, 00:13:20.182 "data_size": 65536 00:13:20.182 }, 00:13:20.182 { 00:13:20.182 "name": "BaseBdev4", 00:13:20.182 "uuid": "f97dd386-1dbe-5c9a-b2d5-48fcbeb80007", 00:13:20.182 "is_configured": true, 00:13:20.182 "data_offset": 0, 00:13:20.182 "data_size": 65536 00:13:20.182 } 00:13:20.182 ] 00:13:20.182 }' 00:13:20.182 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.182 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.751 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:20.751 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:20.751 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.751 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.751 [2024-11-27 11:51:46.874064] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:20.751 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.751 11:51:46 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:20.751 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:20.751 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.751 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.751 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.751 11:51:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.751 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:20.751 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:20.751 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:20.751 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:20.751 11:51:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:20.751 11:51:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:20.751 11:51:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:20.751 11:51:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:20.751 11:51:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:20.751 11:51:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:20.751 11:51:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:20.751 11:51:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:20.751 11:51:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:20.751 11:51:46 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:21.010 [2024-11-27 11:51:47.173253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:21.010 /dev/nbd0 00:13:21.010 11:51:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:21.010 11:51:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:21.010 11:51:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:21.010 11:51:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:21.010 11:51:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:21.010 11:51:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:21.010 11:51:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:21.010 11:51:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:21.010 11:51:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:21.010 11:51:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:21.010 11:51:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:21.010 1+0 records in 00:13:21.010 1+0 records out 00:13:21.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366921 s, 11.2 MB/s 00:13:21.010 11:51:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.010 11:51:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:21.010 11:51:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:13:21.010 11:51:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:21.010 11:51:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:21.010 11:51:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:21.010 11:51:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:21.010 11:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:21.010 11:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:21.010 11:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:27.580 65536+0 records in 00:13:27.580 65536+0 records out 00:13:27.580 33554432 bytes (34 MB, 32 MiB) copied, 5.50856 s, 6.1 MB/s 00:13:27.580 11:51:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:27.580 11:51:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:27.580 11:51:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:27.580 11:51:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:27.580 11:51:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:27.580 11:51:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:27.580 11:51:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:27.580 11:51:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:27.580 [2024-11-27 11:51:52.979866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.580 11:51:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:27.580 
11:51:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:27.580 11:51:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:27.580 11:51:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:27.580 11:51:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:27.580 11:51:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:27.580 11:51:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:27.580 11:51:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:27.580 11:51:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.580 11:51:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.580 [2024-11-27 11:51:52.995955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:27.580 11:51:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.580 11:51:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:27.580 11:51:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.580 11:51:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.580 11:51:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.580 11:51:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.580 11:51:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.580 11:51:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.581 11:51:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.581 11:51:53 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.581 11:51:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.581 11:51:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.581 11:51:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.581 11:51:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.581 11:51:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.581 11:51:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.581 11:51:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.581 "name": "raid_bdev1", 00:13:27.581 "uuid": "20dd19db-4d33-4d93-98c6-b0cb7c63dcf5", 00:13:27.581 "strip_size_kb": 0, 00:13:27.581 "state": "online", 00:13:27.581 "raid_level": "raid1", 00:13:27.581 "superblock": false, 00:13:27.581 "num_base_bdevs": 4, 00:13:27.581 "num_base_bdevs_discovered": 3, 00:13:27.581 "num_base_bdevs_operational": 3, 00:13:27.581 "base_bdevs_list": [ 00:13:27.581 { 00:13:27.581 "name": null, 00:13:27.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.581 "is_configured": false, 00:13:27.581 "data_offset": 0, 00:13:27.581 "data_size": 65536 00:13:27.581 }, 00:13:27.581 { 00:13:27.581 "name": "BaseBdev2", 00:13:27.581 "uuid": "82e62afe-ba5c-5588-8be3-e360932b57f6", 00:13:27.581 "is_configured": true, 00:13:27.581 "data_offset": 0, 00:13:27.581 "data_size": 65536 00:13:27.581 }, 00:13:27.581 { 00:13:27.581 "name": "BaseBdev3", 00:13:27.581 "uuid": "b957fe3e-d827-59f7-b8ba-4f35f1951436", 00:13:27.581 "is_configured": true, 00:13:27.581 "data_offset": 0, 00:13:27.581 "data_size": 65536 00:13:27.581 }, 00:13:27.581 { 00:13:27.581 "name": "BaseBdev4", 00:13:27.581 "uuid": "f97dd386-1dbe-5c9a-b2d5-48fcbeb80007", 00:13:27.581 
"is_configured": true, 00:13:27.581 "data_offset": 0, 00:13:27.581 "data_size": 65536 00:13:27.581 } 00:13:27.581 ] 00:13:27.581 }' 00:13:27.581 11:51:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.581 11:51:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.581 11:51:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:27.581 11:51:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.581 11:51:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.581 [2024-11-27 11:51:53.435204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:27.581 [2024-11-27 11:51:53.450954] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:13:27.581 11:51:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.581 11:51:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:27.581 [2024-11-27 11:51:53.452925] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:28.150 11:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:28.150 11:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.150 11:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:28.150 11:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:28.150 11:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.150 11:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.150 11:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:28.150 11:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.150 11:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.150 11:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.150 11:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.150 "name": "raid_bdev1", 00:13:28.150 "uuid": "20dd19db-4d33-4d93-98c6-b0cb7c63dcf5", 00:13:28.150 "strip_size_kb": 0, 00:13:28.150 "state": "online", 00:13:28.150 "raid_level": "raid1", 00:13:28.150 "superblock": false, 00:13:28.150 "num_base_bdevs": 4, 00:13:28.150 "num_base_bdevs_discovered": 4, 00:13:28.150 "num_base_bdevs_operational": 4, 00:13:28.150 "process": { 00:13:28.150 "type": "rebuild", 00:13:28.150 "target": "spare", 00:13:28.150 "progress": { 00:13:28.150 "blocks": 20480, 00:13:28.150 "percent": 31 00:13:28.150 } 00:13:28.150 }, 00:13:28.150 "base_bdevs_list": [ 00:13:28.150 { 00:13:28.150 "name": "spare", 00:13:28.150 "uuid": "29ce630f-1808-5363-b218-19c7aea1e37d", 00:13:28.150 "is_configured": true, 00:13:28.151 "data_offset": 0, 00:13:28.151 "data_size": 65536 00:13:28.151 }, 00:13:28.151 { 00:13:28.151 "name": "BaseBdev2", 00:13:28.151 "uuid": "82e62afe-ba5c-5588-8be3-e360932b57f6", 00:13:28.151 "is_configured": true, 00:13:28.151 "data_offset": 0, 00:13:28.151 "data_size": 65536 00:13:28.151 }, 00:13:28.151 { 00:13:28.151 "name": "BaseBdev3", 00:13:28.151 "uuid": "b957fe3e-d827-59f7-b8ba-4f35f1951436", 00:13:28.151 "is_configured": true, 00:13:28.151 "data_offset": 0, 00:13:28.151 "data_size": 65536 00:13:28.151 }, 00:13:28.151 { 00:13:28.151 "name": "BaseBdev4", 00:13:28.151 "uuid": "f97dd386-1dbe-5c9a-b2d5-48fcbeb80007", 00:13:28.151 "is_configured": true, 00:13:28.151 "data_offset": 0, 00:13:28.151 "data_size": 65536 00:13:28.151 } 00:13:28.151 ] 00:13:28.151 }' 00:13:28.151 11:51:54 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.410 11:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:28.410 11:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.410 11:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:28.410 11:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:28.410 11:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.410 11:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.410 [2024-11-27 11:51:54.588342] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:28.410 [2024-11-27 11:51:54.658642] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:28.410 [2024-11-27 11:51:54.658829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.410 [2024-11-27 11:51:54.658877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:28.410 [2024-11-27 11:51:54.658902] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:28.410 11:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.410 11:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:28.410 11:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.410 11:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.410 11:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.410 11:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:28.410 11:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:28.410 11:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.410 11:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.410 11:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.410 11:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.410 11:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.410 11:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.410 11:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.410 11:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.410 11:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.410 11:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.410 "name": "raid_bdev1", 00:13:28.410 "uuid": "20dd19db-4d33-4d93-98c6-b0cb7c63dcf5", 00:13:28.410 "strip_size_kb": 0, 00:13:28.410 "state": "online", 00:13:28.410 "raid_level": "raid1", 00:13:28.410 "superblock": false, 00:13:28.410 "num_base_bdevs": 4, 00:13:28.410 "num_base_bdevs_discovered": 3, 00:13:28.410 "num_base_bdevs_operational": 3, 00:13:28.410 "base_bdevs_list": [ 00:13:28.410 { 00:13:28.410 "name": null, 00:13:28.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.410 "is_configured": false, 00:13:28.410 "data_offset": 0, 00:13:28.410 "data_size": 65536 00:13:28.410 }, 00:13:28.410 { 00:13:28.410 "name": "BaseBdev2", 00:13:28.410 "uuid": "82e62afe-ba5c-5588-8be3-e360932b57f6", 00:13:28.410 "is_configured": true, 00:13:28.410 "data_offset": 0, 00:13:28.410 "data_size": 65536 00:13:28.410 }, 00:13:28.410 { 
00:13:28.410 "name": "BaseBdev3", 00:13:28.410 "uuid": "b957fe3e-d827-59f7-b8ba-4f35f1951436", 00:13:28.410 "is_configured": true, 00:13:28.410 "data_offset": 0, 00:13:28.410 "data_size": 65536 00:13:28.410 }, 00:13:28.410 { 00:13:28.410 "name": "BaseBdev4", 00:13:28.410 "uuid": "f97dd386-1dbe-5c9a-b2d5-48fcbeb80007", 00:13:28.410 "is_configured": true, 00:13:28.410 "data_offset": 0, 00:13:28.410 "data_size": 65536 00:13:28.410 } 00:13:28.410 ] 00:13:28.410 }' 00:13:28.410 11:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.410 11:51:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.979 11:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:28.979 11:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.979 11:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:28.979 11:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:28.979 11:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.979 11:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.979 11:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.979 11:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.979 11:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.979 11:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.979 11:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.979 "name": "raid_bdev1", 00:13:28.979 "uuid": "20dd19db-4d33-4d93-98c6-b0cb7c63dcf5", 00:13:28.979 "strip_size_kb": 0, 00:13:28.979 "state": "online", 
00:13:28.979 "raid_level": "raid1", 00:13:28.979 "superblock": false, 00:13:28.979 "num_base_bdevs": 4, 00:13:28.979 "num_base_bdevs_discovered": 3, 00:13:28.979 "num_base_bdevs_operational": 3, 00:13:28.979 "base_bdevs_list": [ 00:13:28.979 { 00:13:28.979 "name": null, 00:13:28.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.979 "is_configured": false, 00:13:28.979 "data_offset": 0, 00:13:28.979 "data_size": 65536 00:13:28.979 }, 00:13:28.979 { 00:13:28.979 "name": "BaseBdev2", 00:13:28.979 "uuid": "82e62afe-ba5c-5588-8be3-e360932b57f6", 00:13:28.979 "is_configured": true, 00:13:28.979 "data_offset": 0, 00:13:28.979 "data_size": 65536 00:13:28.979 }, 00:13:28.979 { 00:13:28.979 "name": "BaseBdev3", 00:13:28.979 "uuid": "b957fe3e-d827-59f7-b8ba-4f35f1951436", 00:13:28.979 "is_configured": true, 00:13:28.979 "data_offset": 0, 00:13:28.979 "data_size": 65536 00:13:28.979 }, 00:13:28.979 { 00:13:28.979 "name": "BaseBdev4", 00:13:28.979 "uuid": "f97dd386-1dbe-5c9a-b2d5-48fcbeb80007", 00:13:28.979 "is_configured": true, 00:13:28.979 "data_offset": 0, 00:13:28.979 "data_size": 65536 00:13:28.979 } 00:13:28.979 ] 00:13:28.979 }' 00:13:28.979 11:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.979 11:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:28.979 11:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.979 11:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:28.979 11:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:28.979 11:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.979 11:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.979 [2024-11-27 11:51:55.285153] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:28.979 [2024-11-27 11:51:55.298795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:13:28.979 11:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.979 11:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:28.979 [2024-11-27 11:51:55.300750] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:29.939 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:29.940 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.940 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:29.940 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:29.940 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.199 "name": "raid_bdev1", 00:13:30.199 "uuid": "20dd19db-4d33-4d93-98c6-b0cb7c63dcf5", 00:13:30.199 "strip_size_kb": 0, 00:13:30.199 "state": "online", 00:13:30.199 "raid_level": "raid1", 00:13:30.199 "superblock": false, 00:13:30.199 "num_base_bdevs": 4, 00:13:30.199 
"num_base_bdevs_discovered": 4, 00:13:30.199 "num_base_bdevs_operational": 4, 00:13:30.199 "process": { 00:13:30.199 "type": "rebuild", 00:13:30.199 "target": "spare", 00:13:30.199 "progress": { 00:13:30.199 "blocks": 20480, 00:13:30.199 "percent": 31 00:13:30.199 } 00:13:30.199 }, 00:13:30.199 "base_bdevs_list": [ 00:13:30.199 { 00:13:30.199 "name": "spare", 00:13:30.199 "uuid": "29ce630f-1808-5363-b218-19c7aea1e37d", 00:13:30.199 "is_configured": true, 00:13:30.199 "data_offset": 0, 00:13:30.199 "data_size": 65536 00:13:30.199 }, 00:13:30.199 { 00:13:30.199 "name": "BaseBdev2", 00:13:30.199 "uuid": "82e62afe-ba5c-5588-8be3-e360932b57f6", 00:13:30.199 "is_configured": true, 00:13:30.199 "data_offset": 0, 00:13:30.199 "data_size": 65536 00:13:30.199 }, 00:13:30.199 { 00:13:30.199 "name": "BaseBdev3", 00:13:30.199 "uuid": "b957fe3e-d827-59f7-b8ba-4f35f1951436", 00:13:30.199 "is_configured": true, 00:13:30.199 "data_offset": 0, 00:13:30.199 "data_size": 65536 00:13:30.199 }, 00:13:30.199 { 00:13:30.199 "name": "BaseBdev4", 00:13:30.199 "uuid": "f97dd386-1dbe-5c9a-b2d5-48fcbeb80007", 00:13:30.199 "is_configured": true, 00:13:30.199 "data_offset": 0, 00:13:30.199 "data_size": 65536 00:13:30.199 } 00:13:30.199 ] 00:13:30.199 }' 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.199 [2024-11-27 11:51:56.452355] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:30.199 [2024-11-27 11:51:56.506743] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.199 11:51:56 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.199 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.199 "name": "raid_bdev1", 00:13:30.199 "uuid": "20dd19db-4d33-4d93-98c6-b0cb7c63dcf5", 00:13:30.199 "strip_size_kb": 0, 00:13:30.199 "state": "online", 00:13:30.199 "raid_level": "raid1", 00:13:30.199 "superblock": false, 00:13:30.199 "num_base_bdevs": 4, 00:13:30.199 "num_base_bdevs_discovered": 3, 00:13:30.199 "num_base_bdevs_operational": 3, 00:13:30.199 "process": { 00:13:30.199 "type": "rebuild", 00:13:30.199 "target": "spare", 00:13:30.199 "progress": { 00:13:30.199 "blocks": 24576, 00:13:30.199 "percent": 37 00:13:30.199 } 00:13:30.199 }, 00:13:30.199 "base_bdevs_list": [ 00:13:30.199 { 00:13:30.199 "name": "spare", 00:13:30.199 "uuid": "29ce630f-1808-5363-b218-19c7aea1e37d", 00:13:30.199 "is_configured": true, 00:13:30.199 "data_offset": 0, 00:13:30.199 "data_size": 65536 00:13:30.199 }, 00:13:30.199 { 00:13:30.199 "name": null, 00:13:30.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.199 "is_configured": false, 00:13:30.199 "data_offset": 0, 00:13:30.200 "data_size": 65536 00:13:30.200 }, 00:13:30.200 { 00:13:30.200 "name": "BaseBdev3", 00:13:30.200 "uuid": "b957fe3e-d827-59f7-b8ba-4f35f1951436", 00:13:30.200 "is_configured": true, 00:13:30.200 "data_offset": 0, 00:13:30.200 "data_size": 65536 00:13:30.200 }, 00:13:30.200 { 00:13:30.200 "name": "BaseBdev4", 00:13:30.200 "uuid": "f97dd386-1dbe-5c9a-b2d5-48fcbeb80007", 00:13:30.200 "is_configured": true, 00:13:30.200 "data_offset": 0, 00:13:30.200 "data_size": 65536 00:13:30.200 } 00:13:30.200 ] 00:13:30.200 }' 00:13:30.200 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.458 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:30.458 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:13:30.458 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:30.458 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=451 00:13:30.458 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:30.458 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:30.458 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.458 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:30.458 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:30.458 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.458 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.458 11:51:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.458 11:51:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.458 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.458 11:51:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.458 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.458 "name": "raid_bdev1", 00:13:30.458 "uuid": "20dd19db-4d33-4d93-98c6-b0cb7c63dcf5", 00:13:30.458 "strip_size_kb": 0, 00:13:30.458 "state": "online", 00:13:30.458 "raid_level": "raid1", 00:13:30.458 "superblock": false, 00:13:30.458 "num_base_bdevs": 4, 00:13:30.458 "num_base_bdevs_discovered": 3, 00:13:30.459 "num_base_bdevs_operational": 3, 00:13:30.459 "process": { 00:13:30.459 "type": "rebuild", 00:13:30.459 "target": "spare", 00:13:30.459 "progress": { 
00:13:30.459 "blocks": 26624, 00:13:30.459 "percent": 40 00:13:30.459 } 00:13:30.459 }, 00:13:30.459 "base_bdevs_list": [ 00:13:30.459 { 00:13:30.459 "name": "spare", 00:13:30.459 "uuid": "29ce630f-1808-5363-b218-19c7aea1e37d", 00:13:30.459 "is_configured": true, 00:13:30.459 "data_offset": 0, 00:13:30.459 "data_size": 65536 00:13:30.459 }, 00:13:30.459 { 00:13:30.459 "name": null, 00:13:30.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.459 "is_configured": false, 00:13:30.459 "data_offset": 0, 00:13:30.459 "data_size": 65536 00:13:30.459 }, 00:13:30.459 { 00:13:30.459 "name": "BaseBdev3", 00:13:30.459 "uuid": "b957fe3e-d827-59f7-b8ba-4f35f1951436", 00:13:30.459 "is_configured": true, 00:13:30.459 "data_offset": 0, 00:13:30.459 "data_size": 65536 00:13:30.459 }, 00:13:30.459 { 00:13:30.459 "name": "BaseBdev4", 00:13:30.459 "uuid": "f97dd386-1dbe-5c9a-b2d5-48fcbeb80007", 00:13:30.459 "is_configured": true, 00:13:30.459 "data_offset": 0, 00:13:30.459 "data_size": 65536 00:13:30.459 } 00:13:30.459 ] 00:13:30.459 }' 00:13:30.459 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.459 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:30.459 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.459 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:30.459 11:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:31.397 11:51:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:31.397 11:51:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:31.397 11:51:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.397 11:51:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:13:31.397 11:51:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:31.397 11:51:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.397 11:51:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.397 11:51:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.397 11:51:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.397 11:51:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.397 11:51:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.656 11:51:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.656 "name": "raid_bdev1", 00:13:31.656 "uuid": "20dd19db-4d33-4d93-98c6-b0cb7c63dcf5", 00:13:31.656 "strip_size_kb": 0, 00:13:31.656 "state": "online", 00:13:31.656 "raid_level": "raid1", 00:13:31.656 "superblock": false, 00:13:31.656 "num_base_bdevs": 4, 00:13:31.656 "num_base_bdevs_discovered": 3, 00:13:31.656 "num_base_bdevs_operational": 3, 00:13:31.656 "process": { 00:13:31.656 "type": "rebuild", 00:13:31.656 "target": "spare", 00:13:31.656 "progress": { 00:13:31.656 "blocks": 49152, 00:13:31.656 "percent": 75 00:13:31.656 } 00:13:31.656 }, 00:13:31.656 "base_bdevs_list": [ 00:13:31.656 { 00:13:31.656 "name": "spare", 00:13:31.656 "uuid": "29ce630f-1808-5363-b218-19c7aea1e37d", 00:13:31.656 "is_configured": true, 00:13:31.656 "data_offset": 0, 00:13:31.656 "data_size": 65536 00:13:31.656 }, 00:13:31.656 { 00:13:31.656 "name": null, 00:13:31.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.656 "is_configured": false, 00:13:31.656 "data_offset": 0, 00:13:31.656 "data_size": 65536 00:13:31.656 }, 00:13:31.656 { 00:13:31.656 "name": "BaseBdev3", 00:13:31.656 "uuid": 
"b957fe3e-d827-59f7-b8ba-4f35f1951436", 00:13:31.656 "is_configured": true, 00:13:31.656 "data_offset": 0, 00:13:31.656 "data_size": 65536 00:13:31.656 }, 00:13:31.656 { 00:13:31.656 "name": "BaseBdev4", 00:13:31.656 "uuid": "f97dd386-1dbe-5c9a-b2d5-48fcbeb80007", 00:13:31.656 "is_configured": true, 00:13:31.656 "data_offset": 0, 00:13:31.656 "data_size": 65536 00:13:31.656 } 00:13:31.656 ] 00:13:31.656 }' 00:13:31.656 11:51:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.656 11:51:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:31.656 11:51:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.656 11:51:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:31.656 11:51:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:32.223 [2024-11-27 11:51:58.516103] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:32.223 [2024-11-27 11:51:58.516189] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:32.223 [2024-11-27 11:51:58.516234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.790 11:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:32.790 11:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:32.790 11:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.790 11:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:32.790 11:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:32.790 11:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.790 11:51:58 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.790 11:51:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.790 11:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.790 11:51:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.790 11:51:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.790 11:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.790 "name": "raid_bdev1", 00:13:32.790 "uuid": "20dd19db-4d33-4d93-98c6-b0cb7c63dcf5", 00:13:32.790 "strip_size_kb": 0, 00:13:32.790 "state": "online", 00:13:32.790 "raid_level": "raid1", 00:13:32.790 "superblock": false, 00:13:32.790 "num_base_bdevs": 4, 00:13:32.790 "num_base_bdevs_discovered": 3, 00:13:32.790 "num_base_bdevs_operational": 3, 00:13:32.790 "base_bdevs_list": [ 00:13:32.790 { 00:13:32.790 "name": "spare", 00:13:32.790 "uuid": "29ce630f-1808-5363-b218-19c7aea1e37d", 00:13:32.790 "is_configured": true, 00:13:32.790 "data_offset": 0, 00:13:32.790 "data_size": 65536 00:13:32.790 }, 00:13:32.790 { 00:13:32.790 "name": null, 00:13:32.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.790 "is_configured": false, 00:13:32.790 "data_offset": 0, 00:13:32.790 "data_size": 65536 00:13:32.790 }, 00:13:32.790 { 00:13:32.790 "name": "BaseBdev3", 00:13:32.790 "uuid": "b957fe3e-d827-59f7-b8ba-4f35f1951436", 00:13:32.790 "is_configured": true, 00:13:32.790 "data_offset": 0, 00:13:32.790 "data_size": 65536 00:13:32.790 }, 00:13:32.790 { 00:13:32.790 "name": "BaseBdev4", 00:13:32.790 "uuid": "f97dd386-1dbe-5c9a-b2d5-48fcbeb80007", 00:13:32.790 "is_configured": true, 00:13:32.790 "data_offset": 0, 00:13:32.790 "data_size": 65536 00:13:32.790 } 00:13:32.790 ] 00:13:32.790 }' 00:13:32.790 11:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:13:32.790 11:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:32.790 11:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.790 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:32.790 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:32.790 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:32.790 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.790 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:32.790 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:32.790 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.790 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.790 11:51:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.790 11:51:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.790 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.790 11:51:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.790 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.791 "name": "raid_bdev1", 00:13:32.791 "uuid": "20dd19db-4d33-4d93-98c6-b0cb7c63dcf5", 00:13:32.791 "strip_size_kb": 0, 00:13:32.791 "state": "online", 00:13:32.791 "raid_level": "raid1", 00:13:32.791 "superblock": false, 00:13:32.791 "num_base_bdevs": 4, 00:13:32.791 "num_base_bdevs_discovered": 3, 00:13:32.791 "num_base_bdevs_operational": 3, 00:13:32.791 
"base_bdevs_list": [ 00:13:32.791 { 00:13:32.791 "name": "spare", 00:13:32.791 "uuid": "29ce630f-1808-5363-b218-19c7aea1e37d", 00:13:32.791 "is_configured": true, 00:13:32.791 "data_offset": 0, 00:13:32.791 "data_size": 65536 00:13:32.791 }, 00:13:32.791 { 00:13:32.791 "name": null, 00:13:32.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.791 "is_configured": false, 00:13:32.791 "data_offset": 0, 00:13:32.791 "data_size": 65536 00:13:32.791 }, 00:13:32.791 { 00:13:32.791 "name": "BaseBdev3", 00:13:32.791 "uuid": "b957fe3e-d827-59f7-b8ba-4f35f1951436", 00:13:32.791 "is_configured": true, 00:13:32.791 "data_offset": 0, 00:13:32.791 "data_size": 65536 00:13:32.791 }, 00:13:32.791 { 00:13:32.791 "name": "BaseBdev4", 00:13:32.791 "uuid": "f97dd386-1dbe-5c9a-b2d5-48fcbeb80007", 00:13:32.791 "is_configured": true, 00:13:32.791 "data_offset": 0, 00:13:32.791 "data_size": 65536 00:13:32.791 } 00:13:32.791 ] 00:13:32.791 }' 00:13:32.791 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.791 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:32.791 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.051 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:33.051 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:33.051 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.051 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.051 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.051 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.051 11:51:59 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:33.051 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.051 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.051 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.051 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.051 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.051 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.051 11:51:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.051 11:51:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.051 11:51:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.051 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.051 "name": "raid_bdev1", 00:13:33.051 "uuid": "20dd19db-4d33-4d93-98c6-b0cb7c63dcf5", 00:13:33.051 "strip_size_kb": 0, 00:13:33.051 "state": "online", 00:13:33.051 "raid_level": "raid1", 00:13:33.051 "superblock": false, 00:13:33.051 "num_base_bdevs": 4, 00:13:33.051 "num_base_bdevs_discovered": 3, 00:13:33.051 "num_base_bdevs_operational": 3, 00:13:33.051 "base_bdevs_list": [ 00:13:33.051 { 00:13:33.051 "name": "spare", 00:13:33.051 "uuid": "29ce630f-1808-5363-b218-19c7aea1e37d", 00:13:33.051 "is_configured": true, 00:13:33.051 "data_offset": 0, 00:13:33.051 "data_size": 65536 00:13:33.051 }, 00:13:33.051 { 00:13:33.051 "name": null, 00:13:33.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.051 "is_configured": false, 00:13:33.051 "data_offset": 0, 00:13:33.051 "data_size": 65536 00:13:33.051 }, 00:13:33.051 { 00:13:33.051 "name": "BaseBdev3", 00:13:33.051 "uuid": 
"b957fe3e-d827-59f7-b8ba-4f35f1951436", 00:13:33.051 "is_configured": true, 00:13:33.051 "data_offset": 0, 00:13:33.051 "data_size": 65536 00:13:33.051 }, 00:13:33.051 { 00:13:33.051 "name": "BaseBdev4", 00:13:33.051 "uuid": "f97dd386-1dbe-5c9a-b2d5-48fcbeb80007", 00:13:33.051 "is_configured": true, 00:13:33.051 "data_offset": 0, 00:13:33.051 "data_size": 65536 00:13:33.051 } 00:13:33.051 ] 00:13:33.051 }' 00:13:33.051 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.051 11:51:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.313 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:33.313 11:51:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.313 11:51:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.313 [2024-11-27 11:51:59.695964] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:33.313 [2024-11-27 11:51:59.696042] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:33.313 [2024-11-27 11:51:59.696151] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:33.313 [2024-11-27 11:51:59.696253] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:33.313 [2024-11-27 11:51:59.696300] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:33.571 11:51:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.571 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.571 11:51:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.571 11:51:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:13:33.571 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:33.571 11:51:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.571 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:33.571 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:33.571 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:33.571 11:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:33.571 11:51:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:33.571 11:51:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:33.571 11:51:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:33.571 11:51:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:33.571 11:51:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:33.571 11:51:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:33.571 11:51:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:33.571 11:51:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:33.571 11:51:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:33.828 /dev/nbd0 00:13:33.828 11:51:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:33.828 11:51:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:33.828 11:51:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:33.828 11:51:59 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:33.828 11:51:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:33.828 11:51:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:33.828 11:51:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:33.828 11:51:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:33.828 11:51:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:33.828 11:51:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:33.828 11:51:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.828 1+0 records in 00:13:33.828 1+0 records out 00:13:33.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363012 s, 11.3 MB/s 00:13:33.828 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.828 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:33.828 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.828 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:33.828 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:33.828 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:33.828 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:33.828 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:34.086 /dev/nbd1 00:13:34.086 
11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:34.086 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:34.086 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:34.086 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:34.086 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:34.086 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:34.086 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:34.086 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:34.086 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:34.086 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:34.086 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.086 1+0 records in 00:13:34.086 1+0 records out 00:13:34.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384858 s, 10.6 MB/s 00:13:34.086 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.086 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:34.086 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.086 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:34.086 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:34.086 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:13:34.086 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:34.086 11:52:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:34.086 11:52:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:34.086 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:34.086 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:34.086 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:34.086 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:34.086 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:34.086 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:34.344 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:34.344 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:34.344 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:34.344 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:34.344 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:34.344 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:34.344 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:34.344 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:34.344 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:34.344 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:34.602 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:34.602 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:34.602 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:34.602 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:34.602 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:34.602 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:34.602 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:34.602 11:52:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:34.602 11:52:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:34.602 11:52:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77557 00:13:34.602 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77557 ']' 00:13:34.602 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77557 00:13:34.602 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:34.602 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:34.602 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77557 00:13:34.602 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:34.602 killing process with pid 77557 00:13:34.602 Received shutdown signal, test time was about 60.000000 seconds 00:13:34.602 00:13:34.602 Latency(us) 00:13:34.602 [2024-11-27T11:52:00.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:34.602 
[2024-11-27T11:52:00.987Z] =================================================================================================================== 00:13:34.602 [2024-11-27T11:52:00.987Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:34.602 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:34.602 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77557' 00:13:34.602 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77557 00:13:34.602 [2024-11-27 11:52:00.912568] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:34.602 11:52:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77557 00:13:35.167 [2024-11-27 11:52:01.402304] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:36.540 00:13:36.540 real 0m17.415s 00:13:36.540 user 0m19.754s 00:13:36.540 sys 0m3.130s 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.540 ************************************ 00:13:36.540 END TEST raid_rebuild_test 00:13:36.540 ************************************ 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.540 11:52:02 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:36.540 11:52:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:36.540 11:52:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.540 11:52:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:36.540 ************************************ 00:13:36.540 START TEST raid_rebuild_test_sb 00:13:36.540 ************************************ 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77999 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77999 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77999 ']' 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:36.540 11:52:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.540 [2024-11-27 11:52:02.691412] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:13:36.540 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:36.540 Zero copy mechanism will not be used. 00:13:36.540 [2024-11-27 11:52:02.691984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77999 ] 00:13:36.540 [2024-11-27 11:52:02.843240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.799 [2024-11-27 11:52:02.959988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.799 [2024-11-27 11:52:03.152347] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:36.799 [2024-11-27 11:52:03.152490] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:37.364 11:52:03 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.364 BaseBdev1_malloc 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.364 [2024-11-27 11:52:03.595591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:37.364 [2024-11-27 11:52:03.595702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.364 [2024-11-27 11:52:03.595742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:37.364 [2024-11-27 11:52:03.595774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.364 [2024-11-27 11:52:03.597821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.364 [2024-11-27 11:52:03.597906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:37.364 BaseBdev1 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.364 
BaseBdev2_malloc 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.364 [2024-11-27 11:52:03.651600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:37.364 [2024-11-27 11:52:03.651724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.364 [2024-11-27 11:52:03.651770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:37.364 [2024-11-27 11:52:03.651817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.364 [2024-11-27 11:52:03.654053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.364 [2024-11-27 11:52:03.654124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:37.364 BaseBdev2 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.364 BaseBdev3_malloc 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.364 [2024-11-27 11:52:03.721087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:37.364 [2024-11-27 11:52:03.721189] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.364 [2024-11-27 11:52:03.721227] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:37.364 [2024-11-27 11:52:03.721256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.364 [2024-11-27 11:52:03.723326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.364 [2024-11-27 11:52:03.723398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:37.364 BaseBdev3 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.364 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.621 BaseBdev4_malloc 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.621 [2024-11-27 11:52:03.775347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:37.621 [2024-11-27 11:52:03.775499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.621 [2024-11-27 11:52:03.775557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:37.621 [2024-11-27 11:52:03.775594] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.621 [2024-11-27 11:52:03.777780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.621 [2024-11-27 11:52:03.777879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:37.621 BaseBdev4 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.621 spare_malloc 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.621 spare_delay 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.621 [2024-11-27 11:52:03.856773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:37.621 [2024-11-27 11:52:03.856900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.621 [2024-11-27 11:52:03.856967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:37.621 [2024-11-27 11:52:03.857003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.621 [2024-11-27 11:52:03.859200] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.621 [2024-11-27 11:52:03.859270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:37.621 spare 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.621 [2024-11-27 11:52:03.864813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:37.621 [2024-11-27 11:52:03.866610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:37.621 [2024-11-27 11:52:03.866708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:37.621 [2024-11-27 11:52:03.866796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:13:37.621 [2024-11-27 11:52:03.867025] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:37.621 [2024-11-27 11:52:03.867075] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:37.621 [2024-11-27 11:52:03.867353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:37.621 [2024-11-27 11:52:03.867574] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:37.621 [2024-11-27 11:52:03.867618] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:37.621 [2024-11-27 11:52:03.867818] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.621 "name": "raid_bdev1", 00:13:37.621 "uuid": "7d6b6e73-d210-4913-905a-4c60019ddcd6", 00:13:37.621 "strip_size_kb": 0, 00:13:37.621 "state": "online", 00:13:37.621 "raid_level": "raid1", 00:13:37.621 "superblock": true, 00:13:37.621 "num_base_bdevs": 4, 00:13:37.621 "num_base_bdevs_discovered": 4, 00:13:37.621 "num_base_bdevs_operational": 4, 00:13:37.621 "base_bdevs_list": [ 00:13:37.621 { 00:13:37.621 "name": "BaseBdev1", 00:13:37.621 "uuid": "603a820b-5b17-54f9-96fc-ee4844a5784d", 00:13:37.621 "is_configured": true, 00:13:37.621 "data_offset": 2048, 00:13:37.621 "data_size": 63488 00:13:37.621 }, 00:13:37.621 { 00:13:37.621 "name": "BaseBdev2", 00:13:37.621 "uuid": "e1d7c23b-ed80-51fc-9ef0-ded93f335a31", 00:13:37.621 "is_configured": true, 00:13:37.621 "data_offset": 2048, 00:13:37.621 "data_size": 63488 00:13:37.621 }, 00:13:37.621 { 00:13:37.621 "name": "BaseBdev3", 00:13:37.621 "uuid": "c2905c37-d053-5593-b343-19b8f5c9efbe", 00:13:37.621 "is_configured": true, 00:13:37.621 "data_offset": 2048, 00:13:37.621 "data_size": 63488 00:13:37.621 }, 00:13:37.621 { 00:13:37.621 "name": "BaseBdev4", 00:13:37.621 "uuid": "1fb48c01-e699-56b1-9363-1e8a08aafefe", 00:13:37.621 "is_configured": true, 00:13:37.621 "data_offset": 2048, 00:13:37.621 "data_size": 63488 00:13:37.621 } 00:13:37.621 ] 00:13:37.621 }' 
00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.621 11:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.186 11:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:38.186 11:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:38.186 11:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.186 11:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.186 [2024-11-27 11:52:04.324418] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:38.186 11:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.186 11:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:38.186 11:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:38.186 11:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.186 11:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.186 11:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.186 11:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.186 11:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:38.186 11:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:38.186 11:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:38.186 11:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:38.186 11:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # 
nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:38.186 11:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:38.186 11:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:38.186 11:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:38.186 11:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:38.186 11:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:38.186 11:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:38.186 11:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:38.186 11:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:38.186 11:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:38.444 [2024-11-27 11:52:04.631687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:38.444 /dev/nbd0 00:13:38.444 11:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:38.444 11:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:38.444 11:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:38.444 11:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:38.444 11:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:38.444 11:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:38.444 11:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:38.444 11:52:04 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@877 -- # break 00:13:38.444 11:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:38.444 11:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:38.444 11:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:38.444 1+0 records in 00:13:38.444 1+0 records out 00:13:38.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362239 s, 11.3 MB/s 00:13:38.444 11:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.444 11:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:38.444 11:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.444 11:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:38.444 11:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:38.444 11:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:38.444 11:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:38.445 11:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:38.445 11:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:38.445 11:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:43.716 63488+0 records in 00:13:43.716 63488+0 records out 00:13:43.716 32505856 bytes (33 MB, 31 MiB) copied, 5.25941 s, 6.2 MB/s 00:13:43.716 11:52:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:43.716 11:52:09 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.716 11:52:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:43.716 11:52:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:43.716 11:52:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:43.716 11:52:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.716 11:52:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:43.977 11:52:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:43.977 [2024-11-27 11:52:10.217924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.977 11:52:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:43.977 11:52:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:43.977 11:52:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.977 11:52:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.977 11:52:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:43.977 11:52:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:43.978 11:52:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.978 11:52:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:43.978 11:52:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.978 11:52:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.978 [2024-11-27 11:52:10.235249] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:43.978 11:52:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.978 11:52:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:43.978 11:52:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.978 11:52:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.978 11:52:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.978 11:52:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.978 11:52:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:43.978 11:52:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.978 11:52:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.978 11:52:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.978 11:52:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.978 11:52:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.978 11:52:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.978 11:52:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.978 11:52:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.978 11:52:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.978 11:52:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.978 "name": "raid_bdev1", 00:13:43.978 "uuid": 
"7d6b6e73-d210-4913-905a-4c60019ddcd6", 00:13:43.978 "strip_size_kb": 0, 00:13:43.978 "state": "online", 00:13:43.978 "raid_level": "raid1", 00:13:43.978 "superblock": true, 00:13:43.978 "num_base_bdevs": 4, 00:13:43.978 "num_base_bdevs_discovered": 3, 00:13:43.978 "num_base_bdevs_operational": 3, 00:13:43.978 "base_bdevs_list": [ 00:13:43.978 { 00:13:43.978 "name": null, 00:13:43.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.978 "is_configured": false, 00:13:43.978 "data_offset": 0, 00:13:43.978 "data_size": 63488 00:13:43.978 }, 00:13:43.978 { 00:13:43.978 "name": "BaseBdev2", 00:13:43.978 "uuid": "e1d7c23b-ed80-51fc-9ef0-ded93f335a31", 00:13:43.978 "is_configured": true, 00:13:43.978 "data_offset": 2048, 00:13:43.978 "data_size": 63488 00:13:43.978 }, 00:13:43.978 { 00:13:43.978 "name": "BaseBdev3", 00:13:43.978 "uuid": "c2905c37-d053-5593-b343-19b8f5c9efbe", 00:13:43.978 "is_configured": true, 00:13:43.978 "data_offset": 2048, 00:13:43.978 "data_size": 63488 00:13:43.978 }, 00:13:43.978 { 00:13:43.978 "name": "BaseBdev4", 00:13:43.978 "uuid": "1fb48c01-e699-56b1-9363-1e8a08aafefe", 00:13:43.978 "is_configured": true, 00:13:43.978 "data_offset": 2048, 00:13:43.978 "data_size": 63488 00:13:43.978 } 00:13:43.978 ] 00:13:43.978 }' 00:13:43.978 11:52:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.978 11:52:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.546 11:52:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:44.546 11:52:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.546 11:52:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.546 [2024-11-27 11:52:10.666527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:44.546 [2024-11-27 11:52:10.683057] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:13:44.546 11:52:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.546 11:52:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:44.546 [2024-11-27 11:52:10.685040] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:45.487 11:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.487 11:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.487 11:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.487 11:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.487 11:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.487 11:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.487 11:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.487 11:52:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.487 11:52:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.487 11:52:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.487 11:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.487 "name": "raid_bdev1", 00:13:45.487 "uuid": "7d6b6e73-d210-4913-905a-4c60019ddcd6", 00:13:45.487 "strip_size_kb": 0, 00:13:45.487 "state": "online", 00:13:45.487 "raid_level": "raid1", 00:13:45.487 "superblock": true, 00:13:45.487 "num_base_bdevs": 4, 00:13:45.487 "num_base_bdevs_discovered": 4, 00:13:45.487 "num_base_bdevs_operational": 4, 00:13:45.487 "process": { 00:13:45.487 "type": 
"rebuild", 00:13:45.487 "target": "spare", 00:13:45.487 "progress": { 00:13:45.487 "blocks": 20480, 00:13:45.487 "percent": 32 00:13:45.487 } 00:13:45.487 }, 00:13:45.487 "base_bdevs_list": [ 00:13:45.487 { 00:13:45.487 "name": "spare", 00:13:45.487 "uuid": "642a0e73-5ebd-59cf-b6a1-fe8f9d552edf", 00:13:45.487 "is_configured": true, 00:13:45.487 "data_offset": 2048, 00:13:45.487 "data_size": 63488 00:13:45.487 }, 00:13:45.487 { 00:13:45.487 "name": "BaseBdev2", 00:13:45.487 "uuid": "e1d7c23b-ed80-51fc-9ef0-ded93f335a31", 00:13:45.487 "is_configured": true, 00:13:45.487 "data_offset": 2048, 00:13:45.487 "data_size": 63488 00:13:45.487 }, 00:13:45.487 { 00:13:45.487 "name": "BaseBdev3", 00:13:45.487 "uuid": "c2905c37-d053-5593-b343-19b8f5c9efbe", 00:13:45.487 "is_configured": true, 00:13:45.487 "data_offset": 2048, 00:13:45.487 "data_size": 63488 00:13:45.487 }, 00:13:45.487 { 00:13:45.487 "name": "BaseBdev4", 00:13:45.487 "uuid": "1fb48c01-e699-56b1-9363-1e8a08aafefe", 00:13:45.487 "is_configured": true, 00:13:45.487 "data_offset": 2048, 00:13:45.487 "data_size": 63488 00:13:45.487 } 00:13:45.487 ] 00:13:45.487 }' 00:13:45.487 11:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.487 11:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.487 11:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.487 11:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.487 11:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:45.487 11:52:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.487 11:52:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.487 [2024-11-27 11:52:11.808468] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:45.747 [2024-11-27 11:52:11.890613] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:45.747 [2024-11-27 11:52:11.890704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.747 [2024-11-27 11:52:11.890721] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:45.747 [2024-11-27 11:52:11.890731] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:45.747 11:52:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.747 11:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:45.747 11:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.747 11:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.747 11:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.747 11:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.747 11:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.747 11:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.747 11:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.747 11:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.747 11:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.747 11:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.747 11:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:13:45.747 11:52:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.747 11:52:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.747 11:52:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.747 11:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.747 "name": "raid_bdev1", 00:13:45.747 "uuid": "7d6b6e73-d210-4913-905a-4c60019ddcd6", 00:13:45.747 "strip_size_kb": 0, 00:13:45.747 "state": "online", 00:13:45.747 "raid_level": "raid1", 00:13:45.747 "superblock": true, 00:13:45.747 "num_base_bdevs": 4, 00:13:45.747 "num_base_bdevs_discovered": 3, 00:13:45.747 "num_base_bdevs_operational": 3, 00:13:45.747 "base_bdevs_list": [ 00:13:45.747 { 00:13:45.747 "name": null, 00:13:45.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.747 "is_configured": false, 00:13:45.747 "data_offset": 0, 00:13:45.747 "data_size": 63488 00:13:45.747 }, 00:13:45.748 { 00:13:45.748 "name": "BaseBdev2", 00:13:45.748 "uuid": "e1d7c23b-ed80-51fc-9ef0-ded93f335a31", 00:13:45.748 "is_configured": true, 00:13:45.748 "data_offset": 2048, 00:13:45.748 "data_size": 63488 00:13:45.748 }, 00:13:45.748 { 00:13:45.748 "name": "BaseBdev3", 00:13:45.748 "uuid": "c2905c37-d053-5593-b343-19b8f5c9efbe", 00:13:45.748 "is_configured": true, 00:13:45.748 "data_offset": 2048, 00:13:45.748 "data_size": 63488 00:13:45.748 }, 00:13:45.748 { 00:13:45.748 "name": "BaseBdev4", 00:13:45.748 "uuid": "1fb48c01-e699-56b1-9363-1e8a08aafefe", 00:13:45.748 "is_configured": true, 00:13:45.748 "data_offset": 2048, 00:13:45.748 "data_size": 63488 00:13:45.748 } 00:13:45.748 ] 00:13:45.748 }' 00:13:45.748 11:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.748 11:52:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.047 11:52:12 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:46.047 11:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.047 11:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:46.047 11:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:46.047 11:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.047 11:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.047 11:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.047 11:52:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.047 11:52:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.047 11:52:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.306 11:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.306 "name": "raid_bdev1", 00:13:46.306 "uuid": "7d6b6e73-d210-4913-905a-4c60019ddcd6", 00:13:46.306 "strip_size_kb": 0, 00:13:46.306 "state": "online", 00:13:46.306 "raid_level": "raid1", 00:13:46.306 "superblock": true, 00:13:46.306 "num_base_bdevs": 4, 00:13:46.306 "num_base_bdevs_discovered": 3, 00:13:46.306 "num_base_bdevs_operational": 3, 00:13:46.306 "base_bdevs_list": [ 00:13:46.306 { 00:13:46.306 "name": null, 00:13:46.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.306 "is_configured": false, 00:13:46.306 "data_offset": 0, 00:13:46.306 "data_size": 63488 00:13:46.306 }, 00:13:46.306 { 00:13:46.306 "name": "BaseBdev2", 00:13:46.306 "uuid": "e1d7c23b-ed80-51fc-9ef0-ded93f335a31", 00:13:46.306 "is_configured": true, 00:13:46.306 "data_offset": 2048, 00:13:46.306 "data_size": 
63488 00:13:46.306 }, 00:13:46.306 { 00:13:46.306 "name": "BaseBdev3", 00:13:46.306 "uuid": "c2905c37-d053-5593-b343-19b8f5c9efbe", 00:13:46.306 "is_configured": true, 00:13:46.306 "data_offset": 2048, 00:13:46.306 "data_size": 63488 00:13:46.306 }, 00:13:46.306 { 00:13:46.306 "name": "BaseBdev4", 00:13:46.306 "uuid": "1fb48c01-e699-56b1-9363-1e8a08aafefe", 00:13:46.306 "is_configured": true, 00:13:46.306 "data_offset": 2048, 00:13:46.306 "data_size": 63488 00:13:46.306 } 00:13:46.306 ] 00:13:46.306 }' 00:13:46.306 11:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.306 11:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:46.306 11:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.306 11:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:46.306 11:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:46.306 11:52:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.306 11:52:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.306 [2024-11-27 11:52:12.544949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:46.306 [2024-11-27 11:52:12.561379] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:13:46.306 11:52:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.306 11:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:46.306 [2024-11-27 11:52:12.563366] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:47.262 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:13:47.262 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.262 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.262 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.262 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.262 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.262 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.262 11:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.262 11:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.262 11:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.262 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.262 "name": "raid_bdev1", 00:13:47.262 "uuid": "7d6b6e73-d210-4913-905a-4c60019ddcd6", 00:13:47.262 "strip_size_kb": 0, 00:13:47.262 "state": "online", 00:13:47.262 "raid_level": "raid1", 00:13:47.262 "superblock": true, 00:13:47.262 "num_base_bdevs": 4, 00:13:47.262 "num_base_bdevs_discovered": 4, 00:13:47.262 "num_base_bdevs_operational": 4, 00:13:47.262 "process": { 00:13:47.262 "type": "rebuild", 00:13:47.262 "target": "spare", 00:13:47.262 "progress": { 00:13:47.262 "blocks": 20480, 00:13:47.262 "percent": 32 00:13:47.262 } 00:13:47.262 }, 00:13:47.262 "base_bdevs_list": [ 00:13:47.262 { 00:13:47.262 "name": "spare", 00:13:47.262 "uuid": "642a0e73-5ebd-59cf-b6a1-fe8f9d552edf", 00:13:47.262 "is_configured": true, 00:13:47.262 "data_offset": 2048, 00:13:47.262 "data_size": 63488 00:13:47.262 }, 00:13:47.262 { 00:13:47.262 "name": "BaseBdev2", 00:13:47.262 "uuid": 
"e1d7c23b-ed80-51fc-9ef0-ded93f335a31", 00:13:47.262 "is_configured": true, 00:13:47.262 "data_offset": 2048, 00:13:47.262 "data_size": 63488 00:13:47.262 }, 00:13:47.262 { 00:13:47.262 "name": "BaseBdev3", 00:13:47.262 "uuid": "c2905c37-d053-5593-b343-19b8f5c9efbe", 00:13:47.262 "is_configured": true, 00:13:47.262 "data_offset": 2048, 00:13:47.262 "data_size": 63488 00:13:47.262 }, 00:13:47.262 { 00:13:47.262 "name": "BaseBdev4", 00:13:47.262 "uuid": "1fb48c01-e699-56b1-9363-1e8a08aafefe", 00:13:47.262 "is_configured": true, 00:13:47.262 "data_offset": 2048, 00:13:47.262 "data_size": 63488 00:13:47.262 } 00:13:47.262 ] 00:13:47.262 }' 00:13:47.262 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.521 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.521 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.521 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.521 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:47.521 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:47.521 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:47.521 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:47.521 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:47.521 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:47.521 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:47.521 11:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.521 11:52:13 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.521 [2024-11-27 11:52:13.731565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:47.521 [2024-11-27 11:52:13.869120] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:13:47.521 11:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.521 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:47.521 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:47.521 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.521 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.521 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.521 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.521 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.521 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.521 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.521 11:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.521 11:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.521 11:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.781 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.781 "name": "raid_bdev1", 00:13:47.781 "uuid": "7d6b6e73-d210-4913-905a-4c60019ddcd6", 00:13:47.781 "strip_size_kb": 0, 00:13:47.781 
"state": "online", 00:13:47.781 "raid_level": "raid1", 00:13:47.781 "superblock": true, 00:13:47.781 "num_base_bdevs": 4, 00:13:47.781 "num_base_bdevs_discovered": 3, 00:13:47.781 "num_base_bdevs_operational": 3, 00:13:47.781 "process": { 00:13:47.781 "type": "rebuild", 00:13:47.781 "target": "spare", 00:13:47.781 "progress": { 00:13:47.781 "blocks": 24576, 00:13:47.781 "percent": 38 00:13:47.781 } 00:13:47.781 }, 00:13:47.781 "base_bdevs_list": [ 00:13:47.781 { 00:13:47.781 "name": "spare", 00:13:47.781 "uuid": "642a0e73-5ebd-59cf-b6a1-fe8f9d552edf", 00:13:47.781 "is_configured": true, 00:13:47.781 "data_offset": 2048, 00:13:47.781 "data_size": 63488 00:13:47.781 }, 00:13:47.781 { 00:13:47.781 "name": null, 00:13:47.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.781 "is_configured": false, 00:13:47.781 "data_offset": 0, 00:13:47.781 "data_size": 63488 00:13:47.781 }, 00:13:47.781 { 00:13:47.781 "name": "BaseBdev3", 00:13:47.781 "uuid": "c2905c37-d053-5593-b343-19b8f5c9efbe", 00:13:47.781 "is_configured": true, 00:13:47.781 "data_offset": 2048, 00:13:47.781 "data_size": 63488 00:13:47.781 }, 00:13:47.781 { 00:13:47.781 "name": "BaseBdev4", 00:13:47.781 "uuid": "1fb48c01-e699-56b1-9363-1e8a08aafefe", 00:13:47.781 "is_configured": true, 00:13:47.781 "data_offset": 2048, 00:13:47.781 "data_size": 63488 00:13:47.781 } 00:13:47.781 ] 00:13:47.781 }' 00:13:47.781 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.781 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.781 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.781 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.781 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=468 00:13:47.781 11:52:13 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:47.781 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.781 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.781 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.781 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.781 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.781 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.781 11:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.781 11:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.781 11:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.781 11:52:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.781 11:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.781 "name": "raid_bdev1", 00:13:47.781 "uuid": "7d6b6e73-d210-4913-905a-4c60019ddcd6", 00:13:47.781 "strip_size_kb": 0, 00:13:47.781 "state": "online", 00:13:47.781 "raid_level": "raid1", 00:13:47.781 "superblock": true, 00:13:47.781 "num_base_bdevs": 4, 00:13:47.781 "num_base_bdevs_discovered": 3, 00:13:47.781 "num_base_bdevs_operational": 3, 00:13:47.781 "process": { 00:13:47.781 "type": "rebuild", 00:13:47.781 "target": "spare", 00:13:47.781 "progress": { 00:13:47.781 "blocks": 26624, 00:13:47.781 "percent": 41 00:13:47.781 } 00:13:47.781 }, 00:13:47.781 "base_bdevs_list": [ 00:13:47.781 { 00:13:47.781 "name": "spare", 00:13:47.781 "uuid": "642a0e73-5ebd-59cf-b6a1-fe8f9d552edf", 00:13:47.781 "is_configured": 
true, 00:13:47.781 "data_offset": 2048, 00:13:47.781 "data_size": 63488 00:13:47.781 }, 00:13:47.781 { 00:13:47.781 "name": null, 00:13:47.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.781 "is_configured": false, 00:13:47.781 "data_offset": 0, 00:13:47.781 "data_size": 63488 00:13:47.781 }, 00:13:47.781 { 00:13:47.781 "name": "BaseBdev3", 00:13:47.781 "uuid": "c2905c37-d053-5593-b343-19b8f5c9efbe", 00:13:47.781 "is_configured": true, 00:13:47.781 "data_offset": 2048, 00:13:47.781 "data_size": 63488 00:13:47.781 }, 00:13:47.781 { 00:13:47.781 "name": "BaseBdev4", 00:13:47.781 "uuid": "1fb48c01-e699-56b1-9363-1e8a08aafefe", 00:13:47.781 "is_configured": true, 00:13:47.781 "data_offset": 2048, 00:13:47.781 "data_size": 63488 00:13:47.781 } 00:13:47.781 ] 00:13:47.781 }' 00:13:47.781 11:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.781 11:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.781 11:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.781 11:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.781 11:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:49.161 11:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:49.161 11:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.161 11:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.161 11:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.161 11:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.161 11:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:13:49.161 11:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.161 11:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.161 11:52:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.161 11:52:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.161 11:52:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.161 11:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.161 "name": "raid_bdev1", 00:13:49.161 "uuid": "7d6b6e73-d210-4913-905a-4c60019ddcd6", 00:13:49.161 "strip_size_kb": 0, 00:13:49.161 "state": "online", 00:13:49.161 "raid_level": "raid1", 00:13:49.161 "superblock": true, 00:13:49.161 "num_base_bdevs": 4, 00:13:49.161 "num_base_bdevs_discovered": 3, 00:13:49.161 "num_base_bdevs_operational": 3, 00:13:49.161 "process": { 00:13:49.161 "type": "rebuild", 00:13:49.161 "target": "spare", 00:13:49.161 "progress": { 00:13:49.161 "blocks": 49152, 00:13:49.161 "percent": 77 00:13:49.161 } 00:13:49.161 }, 00:13:49.161 "base_bdevs_list": [ 00:13:49.161 { 00:13:49.161 "name": "spare", 00:13:49.161 "uuid": "642a0e73-5ebd-59cf-b6a1-fe8f9d552edf", 00:13:49.161 "is_configured": true, 00:13:49.161 "data_offset": 2048, 00:13:49.161 "data_size": 63488 00:13:49.161 }, 00:13:49.161 { 00:13:49.161 "name": null, 00:13:49.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.161 "is_configured": false, 00:13:49.161 "data_offset": 0, 00:13:49.161 "data_size": 63488 00:13:49.161 }, 00:13:49.161 { 00:13:49.161 "name": "BaseBdev3", 00:13:49.161 "uuid": "c2905c37-d053-5593-b343-19b8f5c9efbe", 00:13:49.161 "is_configured": true, 00:13:49.161 "data_offset": 2048, 00:13:49.161 "data_size": 63488 00:13:49.161 }, 00:13:49.161 { 00:13:49.161 "name": "BaseBdev4", 00:13:49.161 "uuid": 
"1fb48c01-e699-56b1-9363-1e8a08aafefe", 00:13:49.161 "is_configured": true, 00:13:49.161 "data_offset": 2048, 00:13:49.161 "data_size": 63488 00:13:49.161 } 00:13:49.161 ] 00:13:49.161 }' 00:13:49.161 11:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.161 11:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:49.161 11:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.161 11:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:49.161 11:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:49.420 [2024-11-27 11:52:15.778193] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:49.420 [2024-11-27 11:52:15.778277] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:49.420 [2024-11-27 11:52:15.778422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.989 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:49.989 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.989 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.989 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.989 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.989 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.989 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.989 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:13:49.989 11:52:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.989 11:52:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.989 11:52:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.989 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.989 "name": "raid_bdev1", 00:13:49.989 "uuid": "7d6b6e73-d210-4913-905a-4c60019ddcd6", 00:13:49.989 "strip_size_kb": 0, 00:13:49.989 "state": "online", 00:13:49.989 "raid_level": "raid1", 00:13:49.989 "superblock": true, 00:13:49.989 "num_base_bdevs": 4, 00:13:49.989 "num_base_bdevs_discovered": 3, 00:13:49.989 "num_base_bdevs_operational": 3, 00:13:49.989 "base_bdevs_list": [ 00:13:49.989 { 00:13:49.989 "name": "spare", 00:13:49.989 "uuid": "642a0e73-5ebd-59cf-b6a1-fe8f9d552edf", 00:13:49.989 "is_configured": true, 00:13:49.989 "data_offset": 2048, 00:13:49.989 "data_size": 63488 00:13:49.989 }, 00:13:49.989 { 00:13:49.989 "name": null, 00:13:49.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.989 "is_configured": false, 00:13:49.989 "data_offset": 0, 00:13:49.989 "data_size": 63488 00:13:49.989 }, 00:13:49.989 { 00:13:49.989 "name": "BaseBdev3", 00:13:49.989 "uuid": "c2905c37-d053-5593-b343-19b8f5c9efbe", 00:13:49.989 "is_configured": true, 00:13:49.989 "data_offset": 2048, 00:13:49.989 "data_size": 63488 00:13:49.989 }, 00:13:49.989 { 00:13:49.989 "name": "BaseBdev4", 00:13:49.989 "uuid": "1fb48c01-e699-56b1-9363-1e8a08aafefe", 00:13:49.989 "is_configured": true, 00:13:49.989 "data_offset": 2048, 00:13:49.989 "data_size": 63488 00:13:49.989 } 00:13:49.989 ] 00:13:49.989 }' 00:13:49.989 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.989 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:49.989 
11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.248 "name": "raid_bdev1", 00:13:50.248 "uuid": "7d6b6e73-d210-4913-905a-4c60019ddcd6", 00:13:50.248 "strip_size_kb": 0, 00:13:50.248 "state": "online", 00:13:50.248 "raid_level": "raid1", 00:13:50.248 "superblock": true, 00:13:50.248 "num_base_bdevs": 4, 00:13:50.248 "num_base_bdevs_discovered": 3, 00:13:50.248 "num_base_bdevs_operational": 3, 00:13:50.248 "base_bdevs_list": [ 00:13:50.248 { 00:13:50.248 "name": "spare", 00:13:50.248 "uuid": 
"642a0e73-5ebd-59cf-b6a1-fe8f9d552edf", 00:13:50.248 "is_configured": true, 00:13:50.248 "data_offset": 2048, 00:13:50.248 "data_size": 63488 00:13:50.248 }, 00:13:50.248 { 00:13:50.248 "name": null, 00:13:50.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.248 "is_configured": false, 00:13:50.248 "data_offset": 0, 00:13:50.248 "data_size": 63488 00:13:50.248 }, 00:13:50.248 { 00:13:50.248 "name": "BaseBdev3", 00:13:50.248 "uuid": "c2905c37-d053-5593-b343-19b8f5c9efbe", 00:13:50.248 "is_configured": true, 00:13:50.248 "data_offset": 2048, 00:13:50.248 "data_size": 63488 00:13:50.248 }, 00:13:50.248 { 00:13:50.248 "name": "BaseBdev4", 00:13:50.248 "uuid": "1fb48c01-e699-56b1-9363-1e8a08aafefe", 00:13:50.248 "is_configured": true, 00:13:50.248 "data_offset": 2048, 00:13:50.248 "data_size": 63488 00:13:50.248 } 00:13:50.248 ] 00:13:50.248 }' 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.248 "name": "raid_bdev1", 00:13:50.248 "uuid": "7d6b6e73-d210-4913-905a-4c60019ddcd6", 00:13:50.248 "strip_size_kb": 0, 00:13:50.248 "state": "online", 00:13:50.248 "raid_level": "raid1", 00:13:50.248 "superblock": true, 00:13:50.248 "num_base_bdevs": 4, 00:13:50.248 "num_base_bdevs_discovered": 3, 00:13:50.248 "num_base_bdevs_operational": 3, 00:13:50.248 "base_bdevs_list": [ 00:13:50.248 { 00:13:50.248 "name": "spare", 00:13:50.248 "uuid": "642a0e73-5ebd-59cf-b6a1-fe8f9d552edf", 00:13:50.248 "is_configured": true, 00:13:50.248 "data_offset": 2048, 00:13:50.248 "data_size": 63488 00:13:50.248 }, 00:13:50.248 { 00:13:50.248 "name": null, 00:13:50.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.248 "is_configured": false, 00:13:50.248 "data_offset": 0, 00:13:50.248 "data_size": 63488 00:13:50.248 }, 00:13:50.248 { 00:13:50.248 "name": "BaseBdev3", 00:13:50.248 "uuid": 
"c2905c37-d053-5593-b343-19b8f5c9efbe", 00:13:50.248 "is_configured": true, 00:13:50.248 "data_offset": 2048, 00:13:50.248 "data_size": 63488 00:13:50.248 }, 00:13:50.248 { 00:13:50.248 "name": "BaseBdev4", 00:13:50.248 "uuid": "1fb48c01-e699-56b1-9363-1e8a08aafefe", 00:13:50.248 "is_configured": true, 00:13:50.248 "data_offset": 2048, 00:13:50.248 "data_size": 63488 00:13:50.248 } 00:13:50.248 ] 00:13:50.248 }' 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.248 11:52:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.817 11:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:50.817 11:52:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.817 11:52:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.817 [2024-11-27 11:52:17.002331] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:50.817 [2024-11-27 11:52:17.002437] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:50.817 [2024-11-27 11:52:17.002534] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:50.817 [2024-11-27 11:52:17.002616] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:50.817 [2024-11-27 11:52:17.002627] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:50.817 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.817 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.817 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:50.817 11:52:17 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.817 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.817 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.817 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:50.817 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:50.817 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:50.817 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:50.817 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.817 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:50.817 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:50.817 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:50.817 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:50.817 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:50.817 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:50.817 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:50.817 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:51.077 /dev/nbd0 00:13:51.077 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:51.077 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:51.077 11:52:17 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:51.077 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:51.077 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:51.077 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:51.077 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:51.077 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:51.077 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:51.077 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:51.077 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:51.077 1+0 records in 00:13:51.077 1+0 records out 00:13:51.077 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272219 s, 15.0 MB/s 00:13:51.077 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.077 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:51.077 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.077 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:51.077 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:51.077 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:51.077 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:51.077 11:52:17 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:51.337 /dev/nbd1 00:13:51.337 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:51.337 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:51.337 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:51.337 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:51.337 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:51.337 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:51.337 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:51.337 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:51.337 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:51.337 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:51.337 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:51.337 1+0 records in 00:13:51.337 1+0 records out 00:13:51.337 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294079 s, 13.9 MB/s 00:13:51.337 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.337 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:51.337 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.337 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # 
'[' 4096 '!=' 0 ']' 00:13:51.337 11:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:51.337 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:51.337 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:51.337 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:51.597 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:51.598 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:51.598 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:51.598 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:51.598 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:51.598 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:51.598 11:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:51.857 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:51.857 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:51.857 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:51.857 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:51.857 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:51.857 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:51.857 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:51.857 
11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:51.857 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:51.857 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:52.116 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:52.116 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:52.116 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:52.116 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:52.116 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:52.116 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:52.116 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:52.116 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:52.116 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:52.116 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:52.116 11:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.116 11:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.116 11:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.116 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:52.116 11:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.116 11:52:18 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:52.116 [2024-11-27 11:52:18.300509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:52.116 [2024-11-27 11:52:18.300571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.116 [2024-11-27 11:52:18.300596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:52.116 [2024-11-27 11:52:18.300605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.116 [2024-11-27 11:52:18.302855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.116 [2024-11-27 11:52:18.302891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:52.116 [2024-11-27 11:52:18.302988] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:52.116 [2024-11-27 11:52:18.303047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:52.116 [2024-11-27 11:52:18.303199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:52.116 [2024-11-27 11:52:18.303287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:52.116 spare 00:13:52.116 11:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.116 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:52.117 11:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.117 11:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.117 [2024-11-27 11:52:18.403195] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:52.117 [2024-11-27 11:52:18.403238] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:52.117 [2024-11-27 
11:52:18.403621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:52.117 [2024-11-27 11:52:18.403873] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:52.117 [2024-11-27 11:52:18.403895] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:52.117 [2024-11-27 11:52:18.404091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.117 11:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.117 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:52.117 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.117 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.117 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.117 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.117 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:52.117 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.117 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.117 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.117 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.117 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.117 11:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.117 11:52:18 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.117 11:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.117 11:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.117 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.117 "name": "raid_bdev1", 00:13:52.117 "uuid": "7d6b6e73-d210-4913-905a-4c60019ddcd6", 00:13:52.117 "strip_size_kb": 0, 00:13:52.117 "state": "online", 00:13:52.117 "raid_level": "raid1", 00:13:52.117 "superblock": true, 00:13:52.117 "num_base_bdevs": 4, 00:13:52.117 "num_base_bdevs_discovered": 3, 00:13:52.117 "num_base_bdevs_operational": 3, 00:13:52.117 "base_bdevs_list": [ 00:13:52.117 { 00:13:52.117 "name": "spare", 00:13:52.117 "uuid": "642a0e73-5ebd-59cf-b6a1-fe8f9d552edf", 00:13:52.117 "is_configured": true, 00:13:52.117 "data_offset": 2048, 00:13:52.117 "data_size": 63488 00:13:52.117 }, 00:13:52.117 { 00:13:52.117 "name": null, 00:13:52.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.117 "is_configured": false, 00:13:52.117 "data_offset": 2048, 00:13:52.117 "data_size": 63488 00:13:52.117 }, 00:13:52.117 { 00:13:52.117 "name": "BaseBdev3", 00:13:52.117 "uuid": "c2905c37-d053-5593-b343-19b8f5c9efbe", 00:13:52.117 "is_configured": true, 00:13:52.117 "data_offset": 2048, 00:13:52.117 "data_size": 63488 00:13:52.117 }, 00:13:52.117 { 00:13:52.117 "name": "BaseBdev4", 00:13:52.117 "uuid": "1fb48c01-e699-56b1-9363-1e8a08aafefe", 00:13:52.117 "is_configured": true, 00:13:52.117 "data_offset": 2048, 00:13:52.117 "data_size": 63488 00:13:52.117 } 00:13:52.117 ] 00:13:52.117 }' 00:13:52.117 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.117 11:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.686 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:13:52.686 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.686 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:52.686 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:52.686 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.686 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.686 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.686 11:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.686 11:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.686 11:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.686 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.686 "name": "raid_bdev1", 00:13:52.686 "uuid": "7d6b6e73-d210-4913-905a-4c60019ddcd6", 00:13:52.686 "strip_size_kb": 0, 00:13:52.686 "state": "online", 00:13:52.686 "raid_level": "raid1", 00:13:52.686 "superblock": true, 00:13:52.686 "num_base_bdevs": 4, 00:13:52.686 "num_base_bdevs_discovered": 3, 00:13:52.686 "num_base_bdevs_operational": 3, 00:13:52.686 "base_bdevs_list": [ 00:13:52.686 { 00:13:52.686 "name": "spare", 00:13:52.686 "uuid": "642a0e73-5ebd-59cf-b6a1-fe8f9d552edf", 00:13:52.686 "is_configured": true, 00:13:52.686 "data_offset": 2048, 00:13:52.686 "data_size": 63488 00:13:52.686 }, 00:13:52.686 { 00:13:52.686 "name": null, 00:13:52.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.686 "is_configured": false, 00:13:52.686 "data_offset": 2048, 00:13:52.686 "data_size": 63488 00:13:52.686 }, 00:13:52.686 { 00:13:52.686 "name": "BaseBdev3", 00:13:52.686 
"uuid": "c2905c37-d053-5593-b343-19b8f5c9efbe", 00:13:52.686 "is_configured": true, 00:13:52.686 "data_offset": 2048, 00:13:52.686 "data_size": 63488 00:13:52.686 }, 00:13:52.686 { 00:13:52.686 "name": "BaseBdev4", 00:13:52.686 "uuid": "1fb48c01-e699-56b1-9363-1e8a08aafefe", 00:13:52.686 "is_configured": true, 00:13:52.686 "data_offset": 2048, 00:13:52.686 "data_size": 63488 00:13:52.686 } 00:13:52.686 ] 00:13:52.686 }' 00:13:52.686 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.686 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:52.686 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.686 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:52.686 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.686 11:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.686 11:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.686 11:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:52.686 11:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.686 11:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.686 11:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:52.686 11:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.686 11:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.686 [2024-11-27 11:52:19.031419] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:52.686 11:52:19 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.686 11:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:52.686 11:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.686 11:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.686 11:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.686 11:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.686 11:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:52.686 11:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.686 11:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.686 11:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.686 11:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.686 11:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.686 11:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.686 11:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.686 11:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.686 11:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.957 11:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.957 "name": "raid_bdev1", 00:13:52.957 "uuid": "7d6b6e73-d210-4913-905a-4c60019ddcd6", 00:13:52.957 "strip_size_kb": 0, 00:13:52.957 "state": "online", 
00:13:52.957 "raid_level": "raid1", 00:13:52.957 "superblock": true, 00:13:52.957 "num_base_bdevs": 4, 00:13:52.957 "num_base_bdevs_discovered": 2, 00:13:52.957 "num_base_bdevs_operational": 2, 00:13:52.957 "base_bdevs_list": [ 00:13:52.958 { 00:13:52.958 "name": null, 00:13:52.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.958 "is_configured": false, 00:13:52.958 "data_offset": 0, 00:13:52.958 "data_size": 63488 00:13:52.958 }, 00:13:52.958 { 00:13:52.958 "name": null, 00:13:52.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.958 "is_configured": false, 00:13:52.958 "data_offset": 2048, 00:13:52.958 "data_size": 63488 00:13:52.958 }, 00:13:52.958 { 00:13:52.958 "name": "BaseBdev3", 00:13:52.958 "uuid": "c2905c37-d053-5593-b343-19b8f5c9efbe", 00:13:52.958 "is_configured": true, 00:13:52.958 "data_offset": 2048, 00:13:52.958 "data_size": 63488 00:13:52.958 }, 00:13:52.958 { 00:13:52.958 "name": "BaseBdev4", 00:13:52.958 "uuid": "1fb48c01-e699-56b1-9363-1e8a08aafefe", 00:13:52.958 "is_configured": true, 00:13:52.958 "data_offset": 2048, 00:13:52.958 "data_size": 63488 00:13:52.958 } 00:13:52.958 ] 00:13:52.958 }' 00:13:52.958 11:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.958 11:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.217 11:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:53.217 11:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.217 11:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.217 [2024-11-27 11:52:19.542579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:53.217 [2024-11-27 11:52:19.542805] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 
00:13:53.217 [2024-11-27 11:52:19.542826] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:53.217 [2024-11-27 11:52:19.542874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:53.217 [2024-11-27 11:52:19.559193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:13:53.217 [2024-11-27 11:52:19.561241] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:53.217 11:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.217 11:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.594 "name": "raid_bdev1", 00:13:54.594 "uuid": 
"7d6b6e73-d210-4913-905a-4c60019ddcd6", 00:13:54.594 "strip_size_kb": 0, 00:13:54.594 "state": "online", 00:13:54.594 "raid_level": "raid1", 00:13:54.594 "superblock": true, 00:13:54.594 "num_base_bdevs": 4, 00:13:54.594 "num_base_bdevs_discovered": 3, 00:13:54.594 "num_base_bdevs_operational": 3, 00:13:54.594 "process": { 00:13:54.594 "type": "rebuild", 00:13:54.594 "target": "spare", 00:13:54.594 "progress": { 00:13:54.594 "blocks": 20480, 00:13:54.594 "percent": 32 00:13:54.594 } 00:13:54.594 }, 00:13:54.594 "base_bdevs_list": [ 00:13:54.594 { 00:13:54.594 "name": "spare", 00:13:54.594 "uuid": "642a0e73-5ebd-59cf-b6a1-fe8f9d552edf", 00:13:54.594 "is_configured": true, 00:13:54.594 "data_offset": 2048, 00:13:54.594 "data_size": 63488 00:13:54.594 }, 00:13:54.594 { 00:13:54.594 "name": null, 00:13:54.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.594 "is_configured": false, 00:13:54.594 "data_offset": 2048, 00:13:54.594 "data_size": 63488 00:13:54.594 }, 00:13:54.594 { 00:13:54.594 "name": "BaseBdev3", 00:13:54.594 "uuid": "c2905c37-d053-5593-b343-19b8f5c9efbe", 00:13:54.594 "is_configured": true, 00:13:54.594 "data_offset": 2048, 00:13:54.594 "data_size": 63488 00:13:54.594 }, 00:13:54.594 { 00:13:54.594 "name": "BaseBdev4", 00:13:54.594 "uuid": "1fb48c01-e699-56b1-9363-1e8a08aafefe", 00:13:54.594 "is_configured": true, 00:13:54.594 "data_offset": 2048, 00:13:54.594 "data_size": 63488 00:13:54.594 } 00:13:54.594 ] 00:13:54.594 }' 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.594 [2024-11-27 11:52:20.708939] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:54.594 [2024-11-27 11:52:20.767156] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:54.594 [2024-11-27 11:52:20.767221] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.594 [2024-11-27 11:52:20.767240] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:54.594 [2024-11-27 11:52:20.767247] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.594 11:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.594 "name": "raid_bdev1", 00:13:54.594 "uuid": "7d6b6e73-d210-4913-905a-4c60019ddcd6", 00:13:54.594 "strip_size_kb": 0, 00:13:54.594 "state": "online", 00:13:54.594 "raid_level": "raid1", 00:13:54.594 "superblock": true, 00:13:54.594 "num_base_bdevs": 4, 00:13:54.594 "num_base_bdevs_discovered": 2, 00:13:54.594 "num_base_bdevs_operational": 2, 00:13:54.594 "base_bdevs_list": [ 00:13:54.594 { 00:13:54.594 "name": null, 00:13:54.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.594 "is_configured": false, 00:13:54.594 "data_offset": 0, 00:13:54.594 "data_size": 63488 00:13:54.594 }, 00:13:54.594 { 00:13:54.594 "name": null, 00:13:54.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.594 "is_configured": false, 00:13:54.594 "data_offset": 2048, 00:13:54.594 "data_size": 63488 00:13:54.594 }, 00:13:54.594 { 00:13:54.594 "name": "BaseBdev3", 00:13:54.594 "uuid": "c2905c37-d053-5593-b343-19b8f5c9efbe", 00:13:54.594 "is_configured": true, 00:13:54.594 "data_offset": 2048, 00:13:54.594 "data_size": 63488 00:13:54.594 }, 00:13:54.594 { 00:13:54.594 "name": "BaseBdev4", 00:13:54.594 "uuid": "1fb48c01-e699-56b1-9363-1e8a08aafefe", 00:13:54.594 "is_configured": true, 00:13:54.594 
"data_offset": 2048, 00:13:54.594 "data_size": 63488 00:13:54.595 } 00:13:54.595 ] 00:13:54.595 }' 00:13:54.595 11:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.595 11:52:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.879 11:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:54.879 11:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.879 11:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.879 [2024-11-27 11:52:21.213973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:54.879 [2024-11-27 11:52:21.214040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.879 [2024-11-27 11:52:21.214071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:13:54.879 [2024-11-27 11:52:21.214080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.879 [2024-11-27 11:52:21.214576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.879 [2024-11-27 11:52:21.214594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:54.879 [2024-11-27 11:52:21.214687] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:54.879 [2024-11-27 11:52:21.214700] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:54.879 [2024-11-27 11:52:21.214717] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:54.879 [2024-11-27 11:52:21.214737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:54.879 [2024-11-27 11:52:21.230302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:13:54.879 spare 00:13:54.879 11:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.879 11:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:54.879 [2024-11-27 11:52:21.232321] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.262 "name": "raid_bdev1", 00:13:56.262 "uuid": "7d6b6e73-d210-4913-905a-4c60019ddcd6", 00:13:56.262 "strip_size_kb": 0, 00:13:56.262 "state": "online", 00:13:56.262 
"raid_level": "raid1", 00:13:56.262 "superblock": true, 00:13:56.262 "num_base_bdevs": 4, 00:13:56.262 "num_base_bdevs_discovered": 3, 00:13:56.262 "num_base_bdevs_operational": 3, 00:13:56.262 "process": { 00:13:56.262 "type": "rebuild", 00:13:56.262 "target": "spare", 00:13:56.262 "progress": { 00:13:56.262 "blocks": 20480, 00:13:56.262 "percent": 32 00:13:56.262 } 00:13:56.262 }, 00:13:56.262 "base_bdevs_list": [ 00:13:56.262 { 00:13:56.262 "name": "spare", 00:13:56.262 "uuid": "642a0e73-5ebd-59cf-b6a1-fe8f9d552edf", 00:13:56.262 "is_configured": true, 00:13:56.262 "data_offset": 2048, 00:13:56.262 "data_size": 63488 00:13:56.262 }, 00:13:56.262 { 00:13:56.262 "name": null, 00:13:56.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.262 "is_configured": false, 00:13:56.262 "data_offset": 2048, 00:13:56.262 "data_size": 63488 00:13:56.262 }, 00:13:56.262 { 00:13:56.262 "name": "BaseBdev3", 00:13:56.262 "uuid": "c2905c37-d053-5593-b343-19b8f5c9efbe", 00:13:56.262 "is_configured": true, 00:13:56.262 "data_offset": 2048, 00:13:56.262 "data_size": 63488 00:13:56.262 }, 00:13:56.262 { 00:13:56.262 "name": "BaseBdev4", 00:13:56.262 "uuid": "1fb48c01-e699-56b1-9363-1e8a08aafefe", 00:13:56.262 "is_configured": true, 00:13:56.262 "data_offset": 2048, 00:13:56.262 "data_size": 63488 00:13:56.262 } 00:13:56.262 ] 00:13:56.262 }' 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.262 [2024-11-27 11:52:22.391831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:56.262 [2024-11-27 11:52:22.438193] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:56.262 [2024-11-27 11:52:22.438265] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.262 [2024-11-27 11:52:22.438283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:56.262 [2024-11-27 11:52:22.438293] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.262 
11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.262 "name": "raid_bdev1", 00:13:56.262 "uuid": "7d6b6e73-d210-4913-905a-4c60019ddcd6", 00:13:56.262 "strip_size_kb": 0, 00:13:56.262 "state": "online", 00:13:56.262 "raid_level": "raid1", 00:13:56.262 "superblock": true, 00:13:56.262 "num_base_bdevs": 4, 00:13:56.262 "num_base_bdevs_discovered": 2, 00:13:56.262 "num_base_bdevs_operational": 2, 00:13:56.262 "base_bdevs_list": [ 00:13:56.262 { 00:13:56.262 "name": null, 00:13:56.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.262 "is_configured": false, 00:13:56.262 "data_offset": 0, 00:13:56.262 "data_size": 63488 00:13:56.262 }, 00:13:56.262 { 00:13:56.262 "name": null, 00:13:56.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.262 "is_configured": false, 00:13:56.262 "data_offset": 2048, 00:13:56.262 "data_size": 63488 00:13:56.262 }, 00:13:56.262 { 00:13:56.262 "name": "BaseBdev3", 00:13:56.262 "uuid": "c2905c37-d053-5593-b343-19b8f5c9efbe", 00:13:56.262 "is_configured": true, 00:13:56.262 "data_offset": 2048, 00:13:56.262 "data_size": 63488 00:13:56.262 }, 00:13:56.262 { 00:13:56.262 "name": "BaseBdev4", 00:13:56.262 "uuid": "1fb48c01-e699-56b1-9363-1e8a08aafefe", 00:13:56.262 "is_configured": true, 00:13:56.262 "data_offset": 2048, 00:13:56.262 "data_size": 63488 00:13:56.262 } 00:13:56.262 ] 00:13:56.262 }' 00:13:56.262 11:52:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.262 11:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.830 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:56.831 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.831 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:56.831 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:56.831 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.831 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.831 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.831 11:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.831 11:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.831 11:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.831 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.831 "name": "raid_bdev1", 00:13:56.831 "uuid": "7d6b6e73-d210-4913-905a-4c60019ddcd6", 00:13:56.831 "strip_size_kb": 0, 00:13:56.831 "state": "online", 00:13:56.831 "raid_level": "raid1", 00:13:56.831 "superblock": true, 00:13:56.831 "num_base_bdevs": 4, 00:13:56.831 "num_base_bdevs_discovered": 2, 00:13:56.831 "num_base_bdevs_operational": 2, 00:13:56.831 "base_bdevs_list": [ 00:13:56.831 { 00:13:56.831 "name": null, 00:13:56.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.831 "is_configured": false, 00:13:56.831 "data_offset": 0, 00:13:56.831 "data_size": 63488 00:13:56.831 }, 00:13:56.831 
{ 00:13:56.831 "name": null, 00:13:56.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.831 "is_configured": false, 00:13:56.831 "data_offset": 2048, 00:13:56.831 "data_size": 63488 00:13:56.831 }, 00:13:56.831 { 00:13:56.831 "name": "BaseBdev3", 00:13:56.831 "uuid": "c2905c37-d053-5593-b343-19b8f5c9efbe", 00:13:56.831 "is_configured": true, 00:13:56.831 "data_offset": 2048, 00:13:56.831 "data_size": 63488 00:13:56.831 }, 00:13:56.831 { 00:13:56.831 "name": "BaseBdev4", 00:13:56.831 "uuid": "1fb48c01-e699-56b1-9363-1e8a08aafefe", 00:13:56.831 "is_configured": true, 00:13:56.831 "data_offset": 2048, 00:13:56.831 "data_size": 63488 00:13:56.831 } 00:13:56.831 ] 00:13:56.831 }' 00:13:56.831 11:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.831 11:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:56.831 11:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.831 11:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:56.831 11:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:56.831 11:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.831 11:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.831 11:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.831 11:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:56.831 11:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.831 11:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.831 [2024-11-27 11:52:23.058586] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:56.831 [2024-11-27 11:52:23.058651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.831 [2024-11-27 11:52:23.058673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:13:56.831 [2024-11-27 11:52:23.058685] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.831 [2024-11-27 11:52:23.059209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.831 [2024-11-27 11:52:23.059233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:56.831 [2024-11-27 11:52:23.059322] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:56.831 [2024-11-27 11:52:23.059341] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:56.831 [2024-11-27 11:52:23.059350] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:56.831 [2024-11-27 11:52:23.059377] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:56.831 BaseBdev1 00:13:56.831 11:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.831 11:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:57.766 11:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:57.766 11:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.766 11:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.766 11:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.766 11:52:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.766 11:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:57.766 11:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.766 11:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.766 11:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.766 11:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.766 11:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.766 11:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.766 11:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.766 11:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.766 11:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.766 11:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.766 "name": "raid_bdev1", 00:13:57.766 "uuid": "7d6b6e73-d210-4913-905a-4c60019ddcd6", 00:13:57.766 "strip_size_kb": 0, 00:13:57.766 "state": "online", 00:13:57.766 "raid_level": "raid1", 00:13:57.766 "superblock": true, 00:13:57.766 "num_base_bdevs": 4, 00:13:57.766 "num_base_bdevs_discovered": 2, 00:13:57.766 "num_base_bdevs_operational": 2, 00:13:57.766 "base_bdevs_list": [ 00:13:57.766 { 00:13:57.766 "name": null, 00:13:57.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.766 "is_configured": false, 00:13:57.766 "data_offset": 0, 00:13:57.766 "data_size": 63488 00:13:57.766 }, 00:13:57.766 { 00:13:57.766 "name": null, 00:13:57.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.766 
"is_configured": false, 00:13:57.766 "data_offset": 2048, 00:13:57.766 "data_size": 63488 00:13:57.766 }, 00:13:57.766 { 00:13:57.766 "name": "BaseBdev3", 00:13:57.766 "uuid": "c2905c37-d053-5593-b343-19b8f5c9efbe", 00:13:57.766 "is_configured": true, 00:13:57.766 "data_offset": 2048, 00:13:57.766 "data_size": 63488 00:13:57.766 }, 00:13:57.766 { 00:13:57.766 "name": "BaseBdev4", 00:13:57.766 "uuid": "1fb48c01-e699-56b1-9363-1e8a08aafefe", 00:13:57.766 "is_configured": true, 00:13:57.766 "data_offset": 2048, 00:13:57.766 "data_size": 63488 00:13:57.766 } 00:13:57.766 ] 00:13:57.766 }' 00:13:57.766 11:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.766 11:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.335 11:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:58.335 11:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.335 11:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:58.335 11:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:58.335 11:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.335 11:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.335 11:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.335 11:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.335 11:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.335 11:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.335 11:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:58.335 "name": "raid_bdev1", 00:13:58.335 "uuid": "7d6b6e73-d210-4913-905a-4c60019ddcd6", 00:13:58.335 "strip_size_kb": 0, 00:13:58.335 "state": "online", 00:13:58.335 "raid_level": "raid1", 00:13:58.335 "superblock": true, 00:13:58.335 "num_base_bdevs": 4, 00:13:58.335 "num_base_bdevs_discovered": 2, 00:13:58.335 "num_base_bdevs_operational": 2, 00:13:58.335 "base_bdevs_list": [ 00:13:58.335 { 00:13:58.335 "name": null, 00:13:58.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.335 "is_configured": false, 00:13:58.335 "data_offset": 0, 00:13:58.335 "data_size": 63488 00:13:58.335 }, 00:13:58.335 { 00:13:58.335 "name": null, 00:13:58.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.335 "is_configured": false, 00:13:58.335 "data_offset": 2048, 00:13:58.335 "data_size": 63488 00:13:58.335 }, 00:13:58.335 { 00:13:58.335 "name": "BaseBdev3", 00:13:58.335 "uuid": "c2905c37-d053-5593-b343-19b8f5c9efbe", 00:13:58.335 "is_configured": true, 00:13:58.335 "data_offset": 2048, 00:13:58.335 "data_size": 63488 00:13:58.335 }, 00:13:58.335 { 00:13:58.335 "name": "BaseBdev4", 00:13:58.335 "uuid": "1fb48c01-e699-56b1-9363-1e8a08aafefe", 00:13:58.336 "is_configured": true, 00:13:58.336 "data_offset": 2048, 00:13:58.336 "data_size": 63488 00:13:58.336 } 00:13:58.336 ] 00:13:58.336 }' 00:13:58.336 11:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.336 11:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:58.336 11:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.336 11:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:58.336 11:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:58.336 11:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:13:58.336 11:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:58.336 11:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:58.336 11:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:58.336 11:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:58.336 11:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:58.336 11:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:58.336 11:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.336 11:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.336 [2024-11-27 11:52:24.639950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:58.336 [2024-11-27 11:52:24.640161] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:58.336 [2024-11-27 11:52:24.640182] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:58.336 request: 00:13:58.336 { 00:13:58.336 "base_bdev": "BaseBdev1", 00:13:58.336 "raid_bdev": "raid_bdev1", 00:13:58.336 "method": "bdev_raid_add_base_bdev", 00:13:58.336 "req_id": 1 00:13:58.336 } 00:13:58.336 Got JSON-RPC error response 00:13:58.336 response: 00:13:58.336 { 00:13:58.336 "code": -22, 00:13:58.336 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:58.336 } 00:13:58.336 11:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:58.336 11:52:24 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:13:58.336 11:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:58.336 11:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:58.336 11:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:58.336 11:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:59.274 11:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:59.274 11:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.274 11:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.274 11:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.274 11:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.274 11:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:59.274 11:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.274 11:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.539 11:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.539 11:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.539 11:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.539 11:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.539 11:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.539 11:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:59.539 11:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.539 11:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.539 "name": "raid_bdev1", 00:13:59.539 "uuid": "7d6b6e73-d210-4913-905a-4c60019ddcd6", 00:13:59.539 "strip_size_kb": 0, 00:13:59.539 "state": "online", 00:13:59.539 "raid_level": "raid1", 00:13:59.539 "superblock": true, 00:13:59.539 "num_base_bdevs": 4, 00:13:59.539 "num_base_bdevs_discovered": 2, 00:13:59.539 "num_base_bdevs_operational": 2, 00:13:59.539 "base_bdevs_list": [ 00:13:59.539 { 00:13:59.539 "name": null, 00:13:59.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.539 "is_configured": false, 00:13:59.539 "data_offset": 0, 00:13:59.539 "data_size": 63488 00:13:59.539 }, 00:13:59.539 { 00:13:59.539 "name": null, 00:13:59.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.539 "is_configured": false, 00:13:59.539 "data_offset": 2048, 00:13:59.539 "data_size": 63488 00:13:59.539 }, 00:13:59.539 { 00:13:59.539 "name": "BaseBdev3", 00:13:59.539 "uuid": "c2905c37-d053-5593-b343-19b8f5c9efbe", 00:13:59.539 "is_configured": true, 00:13:59.539 "data_offset": 2048, 00:13:59.539 "data_size": 63488 00:13:59.539 }, 00:13:59.539 { 00:13:59.539 "name": "BaseBdev4", 00:13:59.539 "uuid": "1fb48c01-e699-56b1-9363-1e8a08aafefe", 00:13:59.539 "is_configured": true, 00:13:59.539 "data_offset": 2048, 00:13:59.539 "data_size": 63488 00:13:59.539 } 00:13:59.539 ] 00:13:59.539 }' 00:13:59.539 11:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.539 11:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.819 11:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:59.819 11:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.819 11:52:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:59.819 11:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:59.819 11:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.819 11:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.819 11:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.819 11:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.819 11:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.819 11:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.819 11:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.819 "name": "raid_bdev1", 00:13:59.820 "uuid": "7d6b6e73-d210-4913-905a-4c60019ddcd6", 00:13:59.820 "strip_size_kb": 0, 00:13:59.820 "state": "online", 00:13:59.820 "raid_level": "raid1", 00:13:59.820 "superblock": true, 00:13:59.820 "num_base_bdevs": 4, 00:13:59.820 "num_base_bdevs_discovered": 2, 00:13:59.820 "num_base_bdevs_operational": 2, 00:13:59.820 "base_bdevs_list": [ 00:13:59.820 { 00:13:59.820 "name": null, 00:13:59.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.820 "is_configured": false, 00:13:59.820 "data_offset": 0, 00:13:59.820 "data_size": 63488 00:13:59.820 }, 00:13:59.820 { 00:13:59.820 "name": null, 00:13:59.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.820 "is_configured": false, 00:13:59.820 "data_offset": 2048, 00:13:59.820 "data_size": 63488 00:13:59.820 }, 00:13:59.820 { 00:13:59.820 "name": "BaseBdev3", 00:13:59.820 "uuid": "c2905c37-d053-5593-b343-19b8f5c9efbe", 00:13:59.820 "is_configured": true, 00:13:59.820 "data_offset": 2048, 00:13:59.820 "data_size": 63488 00:13:59.820 }, 
00:13:59.820 { 00:13:59.820 "name": "BaseBdev4", 00:13:59.820 "uuid": "1fb48c01-e699-56b1-9363-1e8a08aafefe", 00:13:59.820 "is_configured": true, 00:13:59.820 "data_offset": 2048, 00:13:59.820 "data_size": 63488 00:13:59.820 } 00:13:59.820 ] 00:13:59.820 }' 00:13:59.820 11:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.080 11:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:00.080 11:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.080 11:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:00.080 11:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77999 00:14:00.080 11:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77999 ']' 00:14:00.080 11:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 77999 00:14:00.080 11:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:00.080 11:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:00.080 11:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77999 00:14:00.080 killing process with pid 77999 00:14:00.080 Received shutdown signal, test time was about 60.000000 seconds 00:14:00.080 00:14:00.080 Latency(us) 00:14:00.080 [2024-11-27T11:52:26.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.080 [2024-11-27T11:52:26.465Z] =================================================================================================================== 00:14:00.080 [2024-11-27T11:52:26.465Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:00.080 11:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:14:00.080 11:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:00.080 11:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77999' 00:14:00.080 11:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 77999 00:14:00.080 [2024-11-27 11:52:26.316069] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:00.080 [2024-11-27 11:52:26.316203] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:00.080 11:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 77999 00:14:00.080 [2024-11-27 11:52:26.316281] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:00.080 [2024-11-27 11:52:26.316293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:00.650 [2024-11-27 11:52:26.825407] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:02.028 11:52:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:02.028 00:14:02.028 real 0m25.381s 00:14:02.028 user 0m31.095s 00:14:02.028 sys 0m3.612s 00:14:02.028 11:52:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:02.028 11:52:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.028 ************************************ 00:14:02.028 END TEST raid_rebuild_test_sb 00:14:02.028 ************************************ 00:14:02.028 11:52:28 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:02.028 11:52:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:02.028 11:52:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:02.028 11:52:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:14:02.028 ************************************ 00:14:02.028 START TEST raid_rebuild_test_io 00:14:02.028 ************************************ 00:14:02.028 11:52:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:14:02.028 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:02.028 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:02.028 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:02.028 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:02.028 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:02.028 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78764 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78764 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78764 ']' 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.029 11:52:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.029 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:02.029 Zero copy mechanism will not be used. 00:14:02.029 [2024-11-27 11:52:28.172664] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:14:02.029 [2024-11-27 11:52:28.172825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78764 ] 00:14:02.029 [2024-11-27 11:52:28.353957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.288 [2024-11-27 11:52:28.467524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.288 [2024-11-27 11:52:28.669526] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.288 [2024-11-27 11:52:28.669562] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.858 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:02.858 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:02.858 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:02.858 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:14:02.858 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.858 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.858 BaseBdev1_malloc 00:14:02.858 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.858 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:02.858 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.858 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.858 [2024-11-27 11:52:29.094123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:02.858 [2024-11-27 11:52:29.094187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.858 [2024-11-27 11:52:29.094211] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:02.858 [2024-11-27 11:52:29.094224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.859 [2024-11-27 11:52:29.096325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.859 [2024-11-27 11:52:29.096366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:02.859 BaseBdev1 00:14:02.859 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.859 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:02.859 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:02.859 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.859 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:14:02.859 BaseBdev2_malloc 00:14:02.859 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.859 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:02.859 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.859 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.859 [2024-11-27 11:52:29.147432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:02.859 [2024-11-27 11:52:29.147501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.859 [2024-11-27 11:52:29.147542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:02.859 [2024-11-27 11:52:29.147554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.859 [2024-11-27 11:52:29.149615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.859 [2024-11-27 11:52:29.149653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:02.859 BaseBdev2 00:14:02.859 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.859 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:02.859 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:02.859 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.859 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.859 BaseBdev3_malloc 00:14:02.859 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.859 11:52:29 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:02.859 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.859 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.859 [2024-11-27 11:52:29.215044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:02.859 [2024-11-27 11:52:29.215099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.859 [2024-11-27 11:52:29.215124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:02.859 [2024-11-27 11:52:29.215134] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.859 [2024-11-27 11:52:29.217191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.859 [2024-11-27 11:52:29.217232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:02.859 BaseBdev3 00:14:02.859 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.859 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:02.859 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:02.859 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.859 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.118 BaseBdev4_malloc 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.118 [2024-11-27 11:52:29.269501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:03.118 [2024-11-27 11:52:29.269563] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.118 [2024-11-27 11:52:29.269601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:03.118 [2024-11-27 11:52:29.269612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.118 [2024-11-27 11:52:29.271721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.118 [2024-11-27 11:52:29.271764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:03.118 BaseBdev4 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.118 spare_malloc 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.118 spare_delay 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.118 [2024-11-27 11:52:29.335333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:03.118 [2024-11-27 11:52:29.335384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.118 [2024-11-27 11:52:29.335403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:03.118 [2024-11-27 11:52:29.335414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.118 [2024-11-27 11:52:29.337588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.118 [2024-11-27 11:52:29.337625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:03.118 spare 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.118 [2024-11-27 11:52:29.347358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:03.118 [2024-11-27 11:52:29.349289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:03.118 [2024-11-27 11:52:29.349354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:03.118 [2024-11-27 11:52:29.349405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:14:03.118 [2024-11-27 11:52:29.349482] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:03.118 [2024-11-27 11:52:29.349502] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:03.118 [2024-11-27 11:52:29.349745] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:03.118 [2024-11-27 11:52:29.349927] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:03.118 [2024-11-27 11:52:29.349948] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:03.118 [2024-11-27 11:52:29.350104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.118 "name": "raid_bdev1", 00:14:03.118 "uuid": "7e45658f-a0a3-45d3-b00c-4df677622a64", 00:14:03.118 "strip_size_kb": 0, 00:14:03.118 "state": "online", 00:14:03.118 "raid_level": "raid1", 00:14:03.118 "superblock": false, 00:14:03.118 "num_base_bdevs": 4, 00:14:03.118 "num_base_bdevs_discovered": 4, 00:14:03.118 "num_base_bdevs_operational": 4, 00:14:03.118 "base_bdevs_list": [ 00:14:03.118 { 00:14:03.118 "name": "BaseBdev1", 00:14:03.118 "uuid": "f3fd6e56-f8ee-58b9-8638-0fa171a11ea7", 00:14:03.118 "is_configured": true, 00:14:03.118 "data_offset": 0, 00:14:03.118 "data_size": 65536 00:14:03.118 }, 00:14:03.118 { 00:14:03.118 "name": "BaseBdev2", 00:14:03.118 "uuid": "abab7e43-fd4b-5b09-8292-529fbb812b36", 00:14:03.118 "is_configured": true, 00:14:03.118 "data_offset": 0, 00:14:03.118 "data_size": 65536 00:14:03.118 }, 00:14:03.118 { 00:14:03.118 "name": "BaseBdev3", 00:14:03.118 "uuid": "2a377918-25c4-56ed-b6dc-a19047aed060", 00:14:03.118 "is_configured": true, 00:14:03.118 "data_offset": 0, 00:14:03.118 "data_size": 65536 00:14:03.118 }, 00:14:03.118 { 00:14:03.118 "name": "BaseBdev4", 00:14:03.118 "uuid": "14bb936d-a9bf-5676-bbed-1b42028199b0", 00:14:03.118 "is_configured": true, 00:14:03.118 "data_offset": 0, 00:14:03.118 "data_size": 65536 00:14:03.118 } 00:14:03.118 ] 00:14:03.118 }' 00:14:03.118 
11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.118 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.377 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:03.377 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:03.377 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.377 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.639 [2024-11-27 11:52:29.763011] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:03.639 11:52:29 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.639 [2024-11-27 11:52:29.854481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.639 "name": "raid_bdev1", 00:14:03.639 "uuid": "7e45658f-a0a3-45d3-b00c-4df677622a64", 00:14:03.639 "strip_size_kb": 0, 00:14:03.639 "state": "online", 00:14:03.639 "raid_level": "raid1", 00:14:03.639 "superblock": false, 00:14:03.639 "num_base_bdevs": 4, 00:14:03.639 "num_base_bdevs_discovered": 3, 00:14:03.639 "num_base_bdevs_operational": 3, 00:14:03.639 "base_bdevs_list": [ 00:14:03.639 { 00:14:03.639 "name": null, 00:14:03.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.639 "is_configured": false, 00:14:03.639 "data_offset": 0, 00:14:03.639 "data_size": 65536 00:14:03.639 }, 00:14:03.639 { 00:14:03.639 "name": "BaseBdev2", 00:14:03.639 "uuid": "abab7e43-fd4b-5b09-8292-529fbb812b36", 00:14:03.639 "is_configured": true, 00:14:03.639 "data_offset": 0, 00:14:03.639 "data_size": 65536 00:14:03.639 }, 00:14:03.639 { 00:14:03.639 "name": "BaseBdev3", 00:14:03.639 "uuid": "2a377918-25c4-56ed-b6dc-a19047aed060", 00:14:03.639 "is_configured": true, 00:14:03.639 "data_offset": 0, 00:14:03.639 "data_size": 65536 00:14:03.639 }, 00:14:03.639 { 00:14:03.639 "name": "BaseBdev4", 00:14:03.639 "uuid": "14bb936d-a9bf-5676-bbed-1b42028199b0", 00:14:03.639 "is_configured": true, 00:14:03.639 "data_offset": 0, 00:14:03.639 "data_size": 65536 00:14:03.639 } 00:14:03.639 ] 00:14:03.639 }' 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.639 11:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.639 [2024-11-27 11:52:29.953735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:03.640 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:03.640 Zero copy mechanism will not be used. 00:14:03.640 Running I/O for 60 seconds... 
00:14:04.209 11:52:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:04.209 11:52:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.209 11:52:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.209 [2024-11-27 11:52:30.297649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:04.209 11:52:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.209 11:52:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:04.209 [2024-11-27 11:52:30.374878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:04.209 [2024-11-27 11:52:30.377022] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:04.209 [2024-11-27 11:52:30.500516] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:04.209 [2024-11-27 11:52:30.501121] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:04.468 [2024-11-27 11:52:30.624913] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:04.468 [2024-11-27 11:52:30.625245] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:04.727 [2024-11-27 11:52:30.881400] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:04.727 [2024-11-27 11:52:30.881969] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:04.727 201.00 IOPS, 603.00 MiB/s [2024-11-27T11:52:31.112Z] [2024-11-27 11:52:31.087294] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:04.986 [2024-11-27 11:52:31.306483] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:04.986 11:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.986 11:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.986 11:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.986 11:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.986 11:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.987 11:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.987 11:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.987 11:52:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.987 11:52:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.247 11:52:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.247 11:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.247 "name": "raid_bdev1", 00:14:05.247 "uuid": "7e45658f-a0a3-45d3-b00c-4df677622a64", 00:14:05.247 "strip_size_kb": 0, 00:14:05.247 "state": "online", 00:14:05.247 "raid_level": "raid1", 00:14:05.247 "superblock": false, 00:14:05.247 "num_base_bdevs": 4, 00:14:05.247 "num_base_bdevs_discovered": 4, 00:14:05.247 "num_base_bdevs_operational": 4, 00:14:05.247 "process": { 00:14:05.247 "type": "rebuild", 00:14:05.247 "target": "spare", 00:14:05.247 "progress": { 00:14:05.247 "blocks": 14336, 
00:14:05.247 "percent": 21 00:14:05.247 } 00:14:05.247 }, 00:14:05.247 "base_bdevs_list": [ 00:14:05.247 { 00:14:05.247 "name": "spare", 00:14:05.247 "uuid": "4cd71374-9ac3-51eb-805d-310be0d1a295", 00:14:05.247 "is_configured": true, 00:14:05.247 "data_offset": 0, 00:14:05.247 "data_size": 65536 00:14:05.247 }, 00:14:05.247 { 00:14:05.247 "name": "BaseBdev2", 00:14:05.247 "uuid": "abab7e43-fd4b-5b09-8292-529fbb812b36", 00:14:05.247 "is_configured": true, 00:14:05.247 "data_offset": 0, 00:14:05.247 "data_size": 65536 00:14:05.247 }, 00:14:05.247 { 00:14:05.247 "name": "BaseBdev3", 00:14:05.247 "uuid": "2a377918-25c4-56ed-b6dc-a19047aed060", 00:14:05.247 "is_configured": true, 00:14:05.247 "data_offset": 0, 00:14:05.247 "data_size": 65536 00:14:05.247 }, 00:14:05.247 { 00:14:05.247 "name": "BaseBdev4", 00:14:05.247 "uuid": "14bb936d-a9bf-5676-bbed-1b42028199b0", 00:14:05.247 "is_configured": true, 00:14:05.247 "data_offset": 0, 00:14:05.247 "data_size": 65536 00:14:05.247 } 00:14:05.247 ] 00:14:05.247 }' 00:14:05.247 11:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.247 11:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:05.247 11:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.247 11:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.247 11:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:05.247 11:52:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.247 11:52:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.247 [2024-11-27 11:52:31.496390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:05.247 [2024-11-27 11:52:31.533409] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:05.247 [2024-11-27 11:52:31.540320] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:05.247 [2024-11-27 11:52:31.551282] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.247 [2024-11-27 11:52:31.551359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:05.247 [2024-11-27 11:52:31.551374] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:05.247 [2024-11-27 11:52:31.583805] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:05.247 11:52:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.247 11:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:05.247 11:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.247 11:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.247 11:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.247 11:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.247 11:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.247 11:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.247 11:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.247 11:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.247 11:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.247 11:52:31 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.247 11:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.247 11:52:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.247 11:52:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.507 11:52:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.507 11:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.507 "name": "raid_bdev1", 00:14:05.507 "uuid": "7e45658f-a0a3-45d3-b00c-4df677622a64", 00:14:05.507 "strip_size_kb": 0, 00:14:05.507 "state": "online", 00:14:05.507 "raid_level": "raid1", 00:14:05.507 "superblock": false, 00:14:05.507 "num_base_bdevs": 4, 00:14:05.507 "num_base_bdevs_discovered": 3, 00:14:05.507 "num_base_bdevs_operational": 3, 00:14:05.507 "base_bdevs_list": [ 00:14:05.507 { 00:14:05.507 "name": null, 00:14:05.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.507 "is_configured": false, 00:14:05.507 "data_offset": 0, 00:14:05.507 "data_size": 65536 00:14:05.507 }, 00:14:05.507 { 00:14:05.507 "name": "BaseBdev2", 00:14:05.507 "uuid": "abab7e43-fd4b-5b09-8292-529fbb812b36", 00:14:05.507 "is_configured": true, 00:14:05.507 "data_offset": 0, 00:14:05.507 "data_size": 65536 00:14:05.507 }, 00:14:05.507 { 00:14:05.507 "name": "BaseBdev3", 00:14:05.507 "uuid": "2a377918-25c4-56ed-b6dc-a19047aed060", 00:14:05.507 "is_configured": true, 00:14:05.507 "data_offset": 0, 00:14:05.507 "data_size": 65536 00:14:05.507 }, 00:14:05.507 { 00:14:05.507 "name": "BaseBdev4", 00:14:05.507 "uuid": "14bb936d-a9bf-5676-bbed-1b42028199b0", 00:14:05.507 "is_configured": true, 00:14:05.507 "data_offset": 0, 00:14:05.507 "data_size": 65536 00:14:05.508 } 00:14:05.508 ] 00:14:05.508 }' 00:14:05.508 11:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:05.508 11:52:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.767 173.50 IOPS, 520.50 MiB/s [2024-11-27T11:52:32.153Z] 11:52:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:05.768 11:52:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.768 11:52:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:05.768 11:52:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:05.768 11:52:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.768 11:52:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.768 11:52:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.768 11:52:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.768 11:52:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.768 11:52:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.768 11:52:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.768 "name": "raid_bdev1", 00:14:05.768 "uuid": "7e45658f-a0a3-45d3-b00c-4df677622a64", 00:14:05.768 "strip_size_kb": 0, 00:14:05.768 "state": "online", 00:14:05.768 "raid_level": "raid1", 00:14:05.768 "superblock": false, 00:14:05.768 "num_base_bdevs": 4, 00:14:05.768 "num_base_bdevs_discovered": 3, 00:14:05.768 "num_base_bdevs_operational": 3, 00:14:05.768 "base_bdevs_list": [ 00:14:05.768 { 00:14:05.768 "name": null, 00:14:05.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.768 "is_configured": false, 00:14:05.768 "data_offset": 0, 00:14:05.768 "data_size": 65536 00:14:05.768 }, 00:14:05.768 { 
00:14:05.768 "name": "BaseBdev2", 00:14:05.768 "uuid": "abab7e43-fd4b-5b09-8292-529fbb812b36", 00:14:05.768 "is_configured": true, 00:14:05.768 "data_offset": 0, 00:14:05.768 "data_size": 65536 00:14:05.768 }, 00:14:05.768 { 00:14:05.768 "name": "BaseBdev3", 00:14:05.768 "uuid": "2a377918-25c4-56ed-b6dc-a19047aed060", 00:14:05.768 "is_configured": true, 00:14:05.768 "data_offset": 0, 00:14:05.768 "data_size": 65536 00:14:05.768 }, 00:14:05.768 { 00:14:05.768 "name": "BaseBdev4", 00:14:05.768 "uuid": "14bb936d-a9bf-5676-bbed-1b42028199b0", 00:14:05.768 "is_configured": true, 00:14:05.768 "data_offset": 0, 00:14:05.768 "data_size": 65536 00:14:05.768 } 00:14:05.768 ] 00:14:05.768 }' 00:14:05.768 11:52:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.768 11:52:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:05.768 11:52:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.027 11:52:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:06.027 11:52:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:06.027 11:52:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.027 11:52:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.027 [2024-11-27 11:52:32.196163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:06.027 11:52:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.027 11:52:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:06.027 [2024-11-27 11:52:32.255482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:06.027 [2024-11-27 11:52:32.257587] bdev_raid.c:2935:raid_bdev_process_thread_init: 
*NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:06.027 [2024-11-27 11:52:32.382151] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:06.027 [2024-11-27 11:52:32.383649] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:06.287 [2024-11-27 11:52:32.601960] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:06.287 [2024-11-27 11:52:32.602293] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:06.855 157.67 IOPS, 473.00 MiB/s [2024-11-27T11:52:33.240Z] [2024-11-27 11:52:32.961828] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:06.855 [2024-11-27 11:52:33.093035] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:07.114 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.114 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.114 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:07.114 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.114 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.114 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.114 11:52:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.114 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.114 11:52:33 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.114 11:52:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.114 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.114 "name": "raid_bdev1", 00:14:07.114 "uuid": "7e45658f-a0a3-45d3-b00c-4df677622a64", 00:14:07.114 "strip_size_kb": 0, 00:14:07.114 "state": "online", 00:14:07.114 "raid_level": "raid1", 00:14:07.114 "superblock": false, 00:14:07.114 "num_base_bdevs": 4, 00:14:07.114 "num_base_bdevs_discovered": 4, 00:14:07.114 "num_base_bdevs_operational": 4, 00:14:07.114 "process": { 00:14:07.114 "type": "rebuild", 00:14:07.114 "target": "spare", 00:14:07.114 "progress": { 00:14:07.114 "blocks": 12288, 00:14:07.114 "percent": 18 00:14:07.114 } 00:14:07.114 }, 00:14:07.114 "base_bdevs_list": [ 00:14:07.114 { 00:14:07.114 "name": "spare", 00:14:07.114 "uuid": "4cd71374-9ac3-51eb-805d-310be0d1a295", 00:14:07.114 "is_configured": true, 00:14:07.114 "data_offset": 0, 00:14:07.114 "data_size": 65536 00:14:07.114 }, 00:14:07.114 { 00:14:07.114 "name": "BaseBdev2", 00:14:07.114 "uuid": "abab7e43-fd4b-5b09-8292-529fbb812b36", 00:14:07.114 "is_configured": true, 00:14:07.114 "data_offset": 0, 00:14:07.114 "data_size": 65536 00:14:07.114 }, 00:14:07.114 { 00:14:07.115 "name": "BaseBdev3", 00:14:07.115 "uuid": "2a377918-25c4-56ed-b6dc-a19047aed060", 00:14:07.115 "is_configured": true, 00:14:07.115 "data_offset": 0, 00:14:07.115 "data_size": 65536 00:14:07.115 }, 00:14:07.115 { 00:14:07.115 "name": "BaseBdev4", 00:14:07.115 "uuid": "14bb936d-a9bf-5676-bbed-1b42028199b0", 00:14:07.115 "is_configured": true, 00:14:07.115 "data_offset": 0, 00:14:07.115 "data_size": 65536 00:14:07.115 } 00:14:07.115 ] 00:14:07.115 }' 00:14:07.115 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.115 [2024-11-27 11:52:33.326206] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:07.115 [2024-11-27 11:52:33.326820] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:07.115 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:07.115 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.115 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:07.115 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:07.115 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:07.115 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:07.115 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:07.115 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:07.115 11:52:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.115 11:52:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.115 [2024-11-27 11:52:33.405938] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:07.374 [2024-11-27 11:52:33.545130] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:07.374 [2024-11-27 11:52:33.654059] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:07.374 [2024-11-27 11:52:33.654115] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:07.374 11:52:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:07.374 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:07.374 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:07.374 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.374 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.374 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:07.374 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.374 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.374 [2024-11-27 11:52:33.665068] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:07.374 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.374 11:52:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.374 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.374 11:52:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.374 11:52:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.374 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.374 "name": "raid_bdev1", 00:14:07.374 "uuid": "7e45658f-a0a3-45d3-b00c-4df677622a64", 00:14:07.374 "strip_size_kb": 0, 00:14:07.374 "state": "online", 00:14:07.374 "raid_level": "raid1", 00:14:07.374 "superblock": false, 00:14:07.374 "num_base_bdevs": 4, 00:14:07.374 "num_base_bdevs_discovered": 3, 00:14:07.374 "num_base_bdevs_operational": 3, 00:14:07.374 "process": { 
00:14:07.374 "type": "rebuild", 00:14:07.374 "target": "spare", 00:14:07.374 "progress": { 00:14:07.374 "blocks": 16384, 00:14:07.374 "percent": 25 00:14:07.374 } 00:14:07.374 }, 00:14:07.374 "base_bdevs_list": [ 00:14:07.374 { 00:14:07.374 "name": "spare", 00:14:07.374 "uuid": "4cd71374-9ac3-51eb-805d-310be0d1a295", 00:14:07.374 "is_configured": true, 00:14:07.374 "data_offset": 0, 00:14:07.374 "data_size": 65536 00:14:07.374 }, 00:14:07.374 { 00:14:07.374 "name": null, 00:14:07.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.374 "is_configured": false, 00:14:07.374 "data_offset": 0, 00:14:07.374 "data_size": 65536 00:14:07.374 }, 00:14:07.374 { 00:14:07.374 "name": "BaseBdev3", 00:14:07.374 "uuid": "2a377918-25c4-56ed-b6dc-a19047aed060", 00:14:07.374 "is_configured": true, 00:14:07.374 "data_offset": 0, 00:14:07.374 "data_size": 65536 00:14:07.374 }, 00:14:07.374 { 00:14:07.374 "name": "BaseBdev4", 00:14:07.374 "uuid": "14bb936d-a9bf-5676-bbed-1b42028199b0", 00:14:07.374 "is_configured": true, 00:14:07.374 "data_offset": 0, 00:14:07.374 "data_size": 65536 00:14:07.374 } 00:14:07.374 ] 00:14:07.374 }' 00:14:07.374 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.374 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:07.374 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.633 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:07.633 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=488 00:14:07.633 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:07.633 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.633 11:52:33 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.633 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:07.633 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.633 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.633 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.633 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.633 11:52:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.633 11:52:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.633 11:52:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.633 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.633 "name": "raid_bdev1", 00:14:07.633 "uuid": "7e45658f-a0a3-45d3-b00c-4df677622a64", 00:14:07.633 "strip_size_kb": 0, 00:14:07.633 "state": "online", 00:14:07.633 "raid_level": "raid1", 00:14:07.633 "superblock": false, 00:14:07.633 "num_base_bdevs": 4, 00:14:07.633 "num_base_bdevs_discovered": 3, 00:14:07.633 "num_base_bdevs_operational": 3, 00:14:07.633 "process": { 00:14:07.633 "type": "rebuild", 00:14:07.633 "target": "spare", 00:14:07.633 "progress": { 00:14:07.633 "blocks": 16384, 00:14:07.633 "percent": 25 00:14:07.633 } 00:14:07.633 }, 00:14:07.633 "base_bdevs_list": [ 00:14:07.633 { 00:14:07.633 "name": "spare", 00:14:07.633 "uuid": "4cd71374-9ac3-51eb-805d-310be0d1a295", 00:14:07.633 "is_configured": true, 00:14:07.633 "data_offset": 0, 00:14:07.633 "data_size": 65536 00:14:07.633 }, 00:14:07.633 { 00:14:07.633 "name": null, 00:14:07.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.633 "is_configured": false, 00:14:07.633 
"data_offset": 0, 00:14:07.633 "data_size": 65536 00:14:07.633 }, 00:14:07.633 { 00:14:07.633 "name": "BaseBdev3", 00:14:07.634 "uuid": "2a377918-25c4-56ed-b6dc-a19047aed060", 00:14:07.634 "is_configured": true, 00:14:07.634 "data_offset": 0, 00:14:07.634 "data_size": 65536 00:14:07.634 }, 00:14:07.634 { 00:14:07.634 "name": "BaseBdev4", 00:14:07.634 "uuid": "14bb936d-a9bf-5676-bbed-1b42028199b0", 00:14:07.634 "is_configured": true, 00:14:07.634 "data_offset": 0, 00:14:07.634 "data_size": 65536 00:14:07.634 } 00:14:07.634 ] 00:14:07.634 }' 00:14:07.634 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.634 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:07.634 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.634 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:07.634 11:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:07.892 131.75 IOPS, 395.25 MiB/s [2024-11-27T11:52:34.277Z] [2024-11-27 11:52:34.020915] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:07.892 [2024-11-27 11:52:34.022030] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:07.892 [2024-11-27 11:52:34.248935] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:08.852 11:52:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:08.853 11:52:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.853 11:52:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:14:08.853 11:52:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.853 11:52:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.853 11:52:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.853 11:52:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.853 11:52:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.853 11:52:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.853 11:52:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.853 115.40 IOPS, 346.20 MiB/s [2024-11-27T11:52:35.238Z] 11:52:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.853 11:52:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.853 "name": "raid_bdev1", 00:14:08.853 "uuid": "7e45658f-a0a3-45d3-b00c-4df677622a64", 00:14:08.853 "strip_size_kb": 0, 00:14:08.853 "state": "online", 00:14:08.853 "raid_level": "raid1", 00:14:08.853 "superblock": false, 00:14:08.853 "num_base_bdevs": 4, 00:14:08.853 "num_base_bdevs_discovered": 3, 00:14:08.853 "num_base_bdevs_operational": 3, 00:14:08.853 "process": { 00:14:08.853 "type": "rebuild", 00:14:08.853 "target": "spare", 00:14:08.853 "progress": { 00:14:08.853 "blocks": 32768, 00:14:08.853 "percent": 50 00:14:08.853 } 00:14:08.853 }, 00:14:08.853 "base_bdevs_list": [ 00:14:08.853 { 00:14:08.853 "name": "spare", 00:14:08.853 "uuid": "4cd71374-9ac3-51eb-805d-310be0d1a295", 00:14:08.853 "is_configured": true, 00:14:08.853 "data_offset": 0, 00:14:08.853 "data_size": 65536 00:14:08.853 }, 00:14:08.853 { 00:14:08.853 "name": null, 00:14:08.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.853 "is_configured": false, 00:14:08.853 
"data_offset": 0, 00:14:08.853 "data_size": 65536 00:14:08.853 }, 00:14:08.853 { 00:14:08.853 "name": "BaseBdev3", 00:14:08.853 "uuid": "2a377918-25c4-56ed-b6dc-a19047aed060", 00:14:08.853 "is_configured": true, 00:14:08.853 "data_offset": 0, 00:14:08.853 "data_size": 65536 00:14:08.853 }, 00:14:08.853 { 00:14:08.853 "name": "BaseBdev4", 00:14:08.853 "uuid": "14bb936d-a9bf-5676-bbed-1b42028199b0", 00:14:08.853 "is_configured": true, 00:14:08.853 "data_offset": 0, 00:14:08.853 "data_size": 65536 00:14:08.853 } 00:14:08.853 ] 00:14:08.853 }' 00:14:08.853 11:52:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.853 11:52:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:08.853 11:52:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.853 11:52:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:08.853 11:52:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:09.112 [2024-11-27 11:52:35.281871] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:09.112 [2024-11-27 11:52:35.492190] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:09.941 104.50 IOPS, 313.50 MiB/s [2024-11-27T11:52:36.326Z] 11:52:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:09.941 11:52:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.941 11:52:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.941 11:52:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.941 11:52:36 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.941 11:52:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.941 11:52:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.941 11:52:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.941 11:52:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.941 11:52:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.941 11:52:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.941 11:52:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.941 "name": "raid_bdev1", 00:14:09.941 "uuid": "7e45658f-a0a3-45d3-b00c-4df677622a64", 00:14:09.941 "strip_size_kb": 0, 00:14:09.941 "state": "online", 00:14:09.941 "raid_level": "raid1", 00:14:09.941 "superblock": false, 00:14:09.941 "num_base_bdevs": 4, 00:14:09.941 "num_base_bdevs_discovered": 3, 00:14:09.941 "num_base_bdevs_operational": 3, 00:14:09.941 "process": { 00:14:09.941 "type": "rebuild", 00:14:09.941 "target": "spare", 00:14:09.941 "progress": { 00:14:09.941 "blocks": 49152, 00:14:09.941 "percent": 75 00:14:09.941 } 00:14:09.941 }, 00:14:09.941 "base_bdevs_list": [ 00:14:09.941 { 00:14:09.941 "name": "spare", 00:14:09.941 "uuid": "4cd71374-9ac3-51eb-805d-310be0d1a295", 00:14:09.941 "is_configured": true, 00:14:09.941 "data_offset": 0, 00:14:09.941 "data_size": 65536 00:14:09.941 }, 00:14:09.941 { 00:14:09.941 "name": null, 00:14:09.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.941 "is_configured": false, 00:14:09.941 "data_offset": 0, 00:14:09.941 "data_size": 65536 00:14:09.941 }, 00:14:09.941 { 00:14:09.941 "name": "BaseBdev3", 00:14:09.941 "uuid": "2a377918-25c4-56ed-b6dc-a19047aed060", 00:14:09.941 "is_configured": true, 00:14:09.941 
"data_offset": 0, 00:14:09.941 "data_size": 65536 00:14:09.941 }, 00:14:09.941 { 00:14:09.941 "name": "BaseBdev4", 00:14:09.941 "uuid": "14bb936d-a9bf-5676-bbed-1b42028199b0", 00:14:09.941 "is_configured": true, 00:14:09.941 "data_offset": 0, 00:14:09.941 "data_size": 65536 00:14:09.941 } 00:14:09.941 ] 00:14:09.941 }' 00:14:09.941 11:52:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.941 11:52:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.941 11:52:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.941 11:52:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.941 11:52:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:10.200 [2024-11-27 11:52:36.462372] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:10.769 93.71 IOPS, 281.14 MiB/s [2024-11-27T11:52:37.154Z] [2024-11-27 11:52:37.002668] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:10.769 [2024-11-27 11:52:37.102461] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:10.769 [2024-11-27 11:52:37.104168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.029 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:11.029 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.029 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.029 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.029 11:52:37 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.029 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.029 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.029 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.029 11:52:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.029 11:52:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.029 11:52:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.029 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.029 "name": "raid_bdev1", 00:14:11.029 "uuid": "7e45658f-a0a3-45d3-b00c-4df677622a64", 00:14:11.029 "strip_size_kb": 0, 00:14:11.029 "state": "online", 00:14:11.029 "raid_level": "raid1", 00:14:11.029 "superblock": false, 00:14:11.029 "num_base_bdevs": 4, 00:14:11.029 "num_base_bdevs_discovered": 3, 00:14:11.029 "num_base_bdevs_operational": 3, 00:14:11.029 "base_bdevs_list": [ 00:14:11.029 { 00:14:11.029 "name": "spare", 00:14:11.029 "uuid": "4cd71374-9ac3-51eb-805d-310be0d1a295", 00:14:11.029 "is_configured": true, 00:14:11.029 "data_offset": 0, 00:14:11.029 "data_size": 65536 00:14:11.029 }, 00:14:11.029 { 00:14:11.029 "name": null, 00:14:11.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.029 "is_configured": false, 00:14:11.029 "data_offset": 0, 00:14:11.029 "data_size": 65536 00:14:11.029 }, 00:14:11.029 { 00:14:11.029 "name": "BaseBdev3", 00:14:11.029 "uuid": "2a377918-25c4-56ed-b6dc-a19047aed060", 00:14:11.029 "is_configured": true, 00:14:11.029 "data_offset": 0, 00:14:11.029 "data_size": 65536 00:14:11.029 }, 00:14:11.029 { 00:14:11.029 "name": "BaseBdev4", 00:14:11.029 "uuid": "14bb936d-a9bf-5676-bbed-1b42028199b0", 00:14:11.029 "is_configured": 
true, 00:14:11.029 "data_offset": 0, 00:14:11.029 "data_size": 65536 00:14:11.029 } 00:14:11.029 ] 00:14:11.029 }' 00:14:11.029 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.029 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:11.029 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.029 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:11.029 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:11.029 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:11.029 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.029 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:11.029 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:11.029 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.029 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.029 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.029 11:52:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.029 11:52:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.030 11:52:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.289 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.289 "name": "raid_bdev1", 00:14:11.289 "uuid": "7e45658f-a0a3-45d3-b00c-4df677622a64", 00:14:11.289 "strip_size_kb": 0, 
00:14:11.289 "state": "online", 00:14:11.289 "raid_level": "raid1", 00:14:11.289 "superblock": false, 00:14:11.289 "num_base_bdevs": 4, 00:14:11.289 "num_base_bdevs_discovered": 3, 00:14:11.289 "num_base_bdevs_operational": 3, 00:14:11.289 "base_bdevs_list": [ 00:14:11.289 { 00:14:11.289 "name": "spare", 00:14:11.289 "uuid": "4cd71374-9ac3-51eb-805d-310be0d1a295", 00:14:11.289 "is_configured": true, 00:14:11.289 "data_offset": 0, 00:14:11.289 "data_size": 65536 00:14:11.289 }, 00:14:11.289 { 00:14:11.289 "name": null, 00:14:11.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.289 "is_configured": false, 00:14:11.289 "data_offset": 0, 00:14:11.289 "data_size": 65536 00:14:11.289 }, 00:14:11.289 { 00:14:11.289 "name": "BaseBdev3", 00:14:11.289 "uuid": "2a377918-25c4-56ed-b6dc-a19047aed060", 00:14:11.289 "is_configured": true, 00:14:11.289 "data_offset": 0, 00:14:11.289 "data_size": 65536 00:14:11.289 }, 00:14:11.289 { 00:14:11.289 "name": "BaseBdev4", 00:14:11.289 "uuid": "14bb936d-a9bf-5676-bbed-1b42028199b0", 00:14:11.289 "is_configured": true, 00:14:11.289 "data_offset": 0, 00:14:11.289 "data_size": 65536 00:14:11.289 } 00:14:11.289 ] 00:14:11.289 }' 00:14:11.289 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.289 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:11.289 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.289 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:11.289 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:11.289 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.289 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.289 
11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.289 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.289 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:11.289 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.289 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.289 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.289 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.289 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.289 11:52:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.289 11:52:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.289 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.289 11:52:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.289 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.290 "name": "raid_bdev1", 00:14:11.290 "uuid": "7e45658f-a0a3-45d3-b00c-4df677622a64", 00:14:11.290 "strip_size_kb": 0, 00:14:11.290 "state": "online", 00:14:11.290 "raid_level": "raid1", 00:14:11.290 "superblock": false, 00:14:11.290 "num_base_bdevs": 4, 00:14:11.290 "num_base_bdevs_discovered": 3, 00:14:11.290 "num_base_bdevs_operational": 3, 00:14:11.290 "base_bdevs_list": [ 00:14:11.290 { 00:14:11.290 "name": "spare", 00:14:11.290 "uuid": "4cd71374-9ac3-51eb-805d-310be0d1a295", 00:14:11.290 "is_configured": true, 00:14:11.290 "data_offset": 0, 00:14:11.290 "data_size": 65536 00:14:11.290 }, 
00:14:11.290 { 00:14:11.290 "name": null, 00:14:11.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.290 "is_configured": false, 00:14:11.290 "data_offset": 0, 00:14:11.290 "data_size": 65536 00:14:11.290 }, 00:14:11.290 { 00:14:11.290 "name": "BaseBdev3", 00:14:11.290 "uuid": "2a377918-25c4-56ed-b6dc-a19047aed060", 00:14:11.290 "is_configured": true, 00:14:11.290 "data_offset": 0, 00:14:11.290 "data_size": 65536 00:14:11.290 }, 00:14:11.290 { 00:14:11.290 "name": "BaseBdev4", 00:14:11.290 "uuid": "14bb936d-a9bf-5676-bbed-1b42028199b0", 00:14:11.290 "is_configured": true, 00:14:11.290 "data_offset": 0, 00:14:11.290 "data_size": 65536 00:14:11.290 } 00:14:11.290 ] 00:14:11.290 }' 00:14:11.290 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.290 11:52:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.858 86.38 IOPS, 259.12 MiB/s [2024-11-27T11:52:38.243Z] 11:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:11.858 11:52:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.858 11:52:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.858 [2024-11-27 11:52:37.956106] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:11.858 [2024-11-27 11:52:37.956137] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:11.858 00:14:11.858 Latency(us) 00:14:11.858 [2024-11-27T11:52:38.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.858 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:11.858 raid_bdev1 : 8.05 86.07 258.21 0.00 0.00 15841.78 332.69 116762.83 00:14:11.858 [2024-11-27T11:52:38.243Z] 
=================================================================================================================== 00:14:11.858 [2024-11-27T11:52:38.243Z] Total : 86.07 258.21 0.00 0.00 15841.78 332.69 116762.83 00:14:11.858 [2024-11-27 11:52:38.013067] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:11.858 [2024-11-27 11:52:38.013131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.858 [2024-11-27 11:52:38.013230] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:11.858 [2024-11-27 11:52:38.013240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:11.858 { 00:14:11.858 "results": [ 00:14:11.858 { 00:14:11.858 "job": "raid_bdev1", 00:14:11.858 "core_mask": "0x1", 00:14:11.858 "workload": "randrw", 00:14:11.858 "percentage": 50, 00:14:11.858 "status": "finished", 00:14:11.858 "queue_depth": 2, 00:14:11.858 "io_size": 3145728, 00:14:11.858 "runtime": 8.05149, 00:14:11.858 "iops": 86.07102536300735, 00:14:11.858 "mibps": 258.213076089022, 00:14:11.858 "io_failed": 0, 00:14:11.858 "io_timeout": 0, 00:14:11.858 "avg_latency_us": 15841.779491735822, 00:14:11.858 "min_latency_us": 332.6882096069869, 00:14:11.858 "max_latency_us": 116762.82969432314 00:14:11.858 } 00:14:11.858 ], 00:14:11.858 "core_count": 1 00:14:11.858 } 00:14:11.858 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.858 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:11.858 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.858 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.858 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.858 11:52:38 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.858 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:11.858 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:11.858 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:11.858 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:11.858 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:11.858 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:11.858 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:11.859 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:11.859 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:11.859 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:11.859 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:11.859 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:11.859 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:12.118 /dev/nbd0 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( 
i = 1 )) 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:12.118 1+0 records in 00:14:12.118 1+0 records out 00:14:12.118 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352288 s, 11.6 MB/s 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:12.118 11:52:38 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:12.118 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:12.378 /dev/nbd1 00:14:12.378 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:12.378 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:12.378 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:12.378 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:12.378 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:12.378 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:12.378 11:52:38 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:12.378 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:12.378 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:12.378 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:12.378 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:12.378 1+0 records in 00:14:12.378 1+0 records out 00:14:12.378 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381523 s, 10.7 MB/s 00:14:12.378 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.378 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:12.378 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.378 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:12.378 11:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:12.378 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:12.378 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:12.378 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:12.378 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:12.378 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:12.378 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:12.378 
11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:12.378 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:12.378 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.378 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:12.638 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:12.638 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:12.638 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:12.638 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.638 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.638 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:12.638 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:12.638 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.638 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:12.638 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:12.638 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:12.638 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:12.638 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:12.638 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:12.638 11:52:38 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:12.638 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:12.638 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:12.638 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:12.638 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:12.638 11:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:12.904 /dev/nbd1 00:14:12.904 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:12.904 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:12.904 11:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:12.904 11:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:12.904 11:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:12.904 11:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:12.904 11:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:12.904 11:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:12.904 11:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:12.904 11:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:12.904 11:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:12.904 1+0 records in 00:14:12.904 1+0 records out 00:14:12.904 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037298 s, 11.0 
MB/s 00:14:12.905 11:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.905 11:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:12.905 11:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.905 11:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:12.905 11:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:12.905 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:12.905 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:12.905 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:13.167 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:13.167 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:13.167 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:13.167 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:13.167 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:13.167 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:13.167 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:13.167 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:13.167 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:13.167 11:52:39 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:13.167 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:13.167 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:13.167 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:13.167 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:13.167 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:13.167 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:13.167 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:13.167 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:13.167 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:13.167 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:13.167 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:13.167 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:13.427 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:13.427 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:13.427 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:13.427 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:13.427 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:13.427 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:13.427 11:52:39 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:13.427 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:13.427 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:13.427 11:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78764 00:14:13.427 11:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78764 ']' 00:14:13.427 11:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78764 00:14:13.427 11:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:13.427 11:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:13.427 11:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78764 00:14:13.427 11:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:13.427 11:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:13.427 11:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78764' 00:14:13.427 killing process with pid 78764 00:14:13.427 11:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78764 00:14:13.427 Received shutdown signal, test time was about 9.841531 seconds 00:14:13.427 00:14:13.427 Latency(us) 00:14:13.427 [2024-11-27T11:52:39.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.427 [2024-11-27T11:52:39.812Z] =================================================================================================================== 00:14:13.427 [2024-11-27T11:52:39.812Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:13.427 [2024-11-27 11:52:39.778575] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:14:13.427 11:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78764 00:14:14.005 [2024-11-27 11:52:40.210262] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:15.383 00:14:15.383 real 0m13.343s 00:14:15.383 user 0m16.860s 00:14:15.383 sys 0m1.806s 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.383 ************************************ 00:14:15.383 END TEST raid_rebuild_test_io 00:14:15.383 ************************************ 00:14:15.383 11:52:41 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:15.383 11:52:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:15.383 11:52:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:15.383 11:52:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:15.383 ************************************ 00:14:15.383 START TEST raid_rebuild_test_sb_io 00:14:15.383 ************************************ 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:15.383 11:52:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79173 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79173 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79173 ']' 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:15.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:15.383 11:52:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.383 [2024-11-27 11:52:41.564630] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:14:15.383 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:15.383 Zero copy mechanism will not be used. 00:14:15.383 [2024-11-27 11:52:41.565203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79173 ] 00:14:15.383 [2024-11-27 11:52:41.740664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.642 [2024-11-27 11:52:41.856772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.901 [2024-11-27 11:52:42.055718] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:15.902 [2024-11-27 11:52:42.055751] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.161 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:16.161 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:16.161 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:16.161 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:16.161 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.161 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.161 BaseBdev1_malloc 00:14:16.161 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.161 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:16.161 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.161 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.161 [2024-11-27 11:52:42.442336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:16.161 [2024-11-27 11:52:42.442392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.161 [2024-11-27 11:52:42.442415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:16.161 [2024-11-27 11:52:42.442426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.161 [2024-11-27 11:52:42.444746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.161 [2024-11-27 11:52:42.444794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:16.161 BaseBdev1 00:14:16.161 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.161 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:16.161 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:16.161 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.161 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.161 BaseBdev2_malloc 00:14:16.161 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.161 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:14:16.161 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.161 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.161 [2024-11-27 11:52:42.497712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:16.161 [2024-11-27 11:52:42.497775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.161 [2024-11-27 11:52:42.497799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:16.161 [2024-11-27 11:52:42.497811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.161 [2024-11-27 11:52:42.499905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.161 [2024-11-27 11:52:42.499941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:16.161 BaseBdev2 00:14:16.161 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.161 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:16.161 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:16.161 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.161 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.421 BaseBdev3_malloc 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.421 11:52:42 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.421 [2024-11-27 11:52:42.568058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:16.421 [2024-11-27 11:52:42.568123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.421 [2024-11-27 11:52:42.568150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:16.421 [2024-11-27 11:52:42.568163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.421 [2024-11-27 11:52:42.570332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.421 [2024-11-27 11:52:42.570444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:16.421 BaseBdev3 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.421 BaseBdev4_malloc 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.421 [2024-11-27 11:52:42.624808] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:14:16.421 [2024-11-27 11:52:42.624882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.421 [2024-11-27 11:52:42.624908] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:16.421 [2024-11-27 11:52:42.624919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.421 [2024-11-27 11:52:42.627033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.421 [2024-11-27 11:52:42.627072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:16.421 BaseBdev4 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.421 spare_malloc 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.421 spare_delay 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.421 [2024-11-27 11:52:42.693790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:16.421 [2024-11-27 11:52:42.693855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.421 [2024-11-27 11:52:42.693874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:16.421 [2024-11-27 11:52:42.693884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.421 [2024-11-27 11:52:42.695944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.421 [2024-11-27 11:52:42.695980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:16.421 spare 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.421 [2024-11-27 11:52:42.705809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:16.421 [2024-11-27 11:52:42.707661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:16.421 [2024-11-27 11:52:42.707764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:16.421 [2024-11-27 11:52:42.707823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:16.421 [2024-11-27 11:52:42.708014] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:14:16.421 [2024-11-27 11:52:42.708030] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:16.421 [2024-11-27 11:52:42.708281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:16.421 [2024-11-27 11:52:42.708450] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:16.421 [2024-11-27 11:52:42.708460] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:16.421 [2024-11-27 11:52:42.708596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.421 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.421 "name": "raid_bdev1", 00:14:16.421 "uuid": "f15bec40-9c5d-4527-9a64-4040059270c7", 00:14:16.421 "strip_size_kb": 0, 00:14:16.421 "state": "online", 00:14:16.421 "raid_level": "raid1", 00:14:16.421 "superblock": true, 00:14:16.421 "num_base_bdevs": 4, 00:14:16.421 "num_base_bdevs_discovered": 4, 00:14:16.421 "num_base_bdevs_operational": 4, 00:14:16.421 "base_bdevs_list": [ 00:14:16.421 { 00:14:16.421 "name": "BaseBdev1", 00:14:16.421 "uuid": "c04d554b-3488-5028-8e69-294e4c887ba3", 00:14:16.421 "is_configured": true, 00:14:16.421 "data_offset": 2048, 00:14:16.421 "data_size": 63488 00:14:16.421 }, 00:14:16.421 { 00:14:16.421 "name": "BaseBdev2", 00:14:16.421 "uuid": "cb204491-6617-532a-a7f2-cf99faec8529", 00:14:16.421 "is_configured": true, 00:14:16.421 "data_offset": 2048, 00:14:16.421 "data_size": 63488 00:14:16.421 }, 00:14:16.422 { 00:14:16.422 "name": "BaseBdev3", 00:14:16.422 "uuid": "3ad71a3d-8f83-5112-9914-f49bc2d1caa1", 00:14:16.422 "is_configured": true, 00:14:16.422 "data_offset": 2048, 00:14:16.422 "data_size": 63488 00:14:16.422 }, 00:14:16.422 { 00:14:16.422 "name": "BaseBdev4", 00:14:16.422 "uuid": "77c9ed8d-e229-5913-bb07-0eb90dd65344", 00:14:16.422 "is_configured": true, 00:14:16.422 "data_offset": 2048, 00:14:16.422 "data_size": 63488 00:14:16.422 } 00:14:16.422 ] 00:14:16.422 }' 00:14:16.422 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:16.422 11:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.989 [2024-11-27 11:52:43.157387] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:16.989 11:52:43 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.989 [2024-11-27 11:52:43.240897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.989 "name": "raid_bdev1", 00:14:16.989 "uuid": "f15bec40-9c5d-4527-9a64-4040059270c7", 00:14:16.989 "strip_size_kb": 0, 00:14:16.989 "state": "online", 00:14:16.989 "raid_level": "raid1", 00:14:16.989 "superblock": true, 00:14:16.989 "num_base_bdevs": 4, 00:14:16.989 "num_base_bdevs_discovered": 3, 00:14:16.989 "num_base_bdevs_operational": 3, 00:14:16.989 "base_bdevs_list": [ 00:14:16.989 { 00:14:16.989 "name": null, 00:14:16.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.989 "is_configured": false, 00:14:16.989 "data_offset": 0, 00:14:16.989 "data_size": 63488 00:14:16.989 }, 00:14:16.989 { 00:14:16.989 "name": "BaseBdev2", 00:14:16.989 "uuid": "cb204491-6617-532a-a7f2-cf99faec8529", 00:14:16.989 "is_configured": true, 00:14:16.989 "data_offset": 2048, 00:14:16.989 "data_size": 63488 00:14:16.989 }, 00:14:16.989 { 00:14:16.989 "name": "BaseBdev3", 00:14:16.989 "uuid": "3ad71a3d-8f83-5112-9914-f49bc2d1caa1", 00:14:16.989 "is_configured": true, 00:14:16.989 "data_offset": 2048, 00:14:16.989 "data_size": 63488 00:14:16.989 }, 00:14:16.989 { 00:14:16.989 "name": "BaseBdev4", 00:14:16.989 "uuid": "77c9ed8d-e229-5913-bb07-0eb90dd65344", 00:14:16.989 "is_configured": true, 00:14:16.989 "data_offset": 2048, 00:14:16.989 "data_size": 63488 00:14:16.989 } 00:14:16.989 ] 00:14:16.989 }' 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.989 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.989 [2024-11-27 11:52:43.336059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:16.989 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:16.989 Zero copy mechanism will not be used. 
00:14:16.989 Running I/O for 60 seconds... 00:14:17.557 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:17.557 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.557 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.557 [2024-11-27 11:52:43.682296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:17.557 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.557 11:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:17.557 [2024-11-27 11:52:43.757518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:17.557 [2024-11-27 11:52:43.759715] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:17.557 [2024-11-27 11:52:43.873896] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:17.557 [2024-11-27 11:52:43.875378] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:17.817 [2024-11-27 11:52:44.088661] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:17.817 [2024-11-27 11:52:44.089074] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:18.077 119.00 IOPS, 357.00 MiB/s [2024-11-27T11:52:44.462Z] [2024-11-27 11:52:44.413307] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:18.077 [2024-11-27 11:52:44.414045] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:18.339 
[2024-11-27 11:52:44.538383] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:18.602 11:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.602 11:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.602 11:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.602 11:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.602 11:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.602 11:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.602 11:52:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.602 11:52:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.602 11:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.602 11:52:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.602 11:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.602 "name": "raid_bdev1", 00:14:18.602 "uuid": "f15bec40-9c5d-4527-9a64-4040059270c7", 00:14:18.602 "strip_size_kb": 0, 00:14:18.602 "state": "online", 00:14:18.602 "raid_level": "raid1", 00:14:18.602 "superblock": true, 00:14:18.602 "num_base_bdevs": 4, 00:14:18.602 "num_base_bdevs_discovered": 4, 00:14:18.602 "num_base_bdevs_operational": 4, 00:14:18.602 "process": { 00:14:18.602 "type": "rebuild", 00:14:18.602 "target": "spare", 00:14:18.602 "progress": { 00:14:18.602 "blocks": 12288, 00:14:18.602 "percent": 19 00:14:18.602 } 00:14:18.602 }, 00:14:18.602 "base_bdevs_list": [ 
00:14:18.602 { 00:14:18.602 "name": "spare", 00:14:18.602 "uuid": "c0aae782-842f-5e42-8441-ce089ca88722", 00:14:18.602 "is_configured": true, 00:14:18.602 "data_offset": 2048, 00:14:18.602 "data_size": 63488 00:14:18.602 }, 00:14:18.602 { 00:14:18.602 "name": "BaseBdev2", 00:14:18.602 "uuid": "cb204491-6617-532a-a7f2-cf99faec8529", 00:14:18.602 "is_configured": true, 00:14:18.602 "data_offset": 2048, 00:14:18.602 "data_size": 63488 00:14:18.602 }, 00:14:18.602 { 00:14:18.602 "name": "BaseBdev3", 00:14:18.602 "uuid": "3ad71a3d-8f83-5112-9914-f49bc2d1caa1", 00:14:18.602 "is_configured": true, 00:14:18.602 "data_offset": 2048, 00:14:18.602 "data_size": 63488 00:14:18.602 }, 00:14:18.602 { 00:14:18.602 "name": "BaseBdev4", 00:14:18.602 "uuid": "77c9ed8d-e229-5913-bb07-0eb90dd65344", 00:14:18.602 "is_configured": true, 00:14:18.602 "data_offset": 2048, 00:14:18.602 "data_size": 63488 00:14:18.602 } 00:14:18.602 ] 00:14:18.602 }' 00:14:18.602 11:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.602 11:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.602 11:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.602 11:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.602 11:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:18.602 11:52:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.602 [2024-11-27 11:52:44.874415] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:18.602 11:52:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.602 [2024-11-27 11:52:44.886009] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:14:18.862 [2024-11-27 11:52:45.025275] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:18.862 [2024-11-27 11:52:45.029625] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.862 [2024-11-27 11:52:45.029772] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:18.862 [2024-11-27 11:52:45.029806] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:18.862 [2024-11-27 11:52:45.055186] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:18.862 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.862 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:18.862 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.862 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.862 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.862 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.862 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:18.862 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.862 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.862 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.862 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.862 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- 
# rpc_cmd bdev_raid_get_bdevs all 00:14:18.862 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.862 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.862 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.862 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.862 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.862 "name": "raid_bdev1", 00:14:18.862 "uuid": "f15bec40-9c5d-4527-9a64-4040059270c7", 00:14:18.862 "strip_size_kb": 0, 00:14:18.862 "state": "online", 00:14:18.862 "raid_level": "raid1", 00:14:18.862 "superblock": true, 00:14:18.862 "num_base_bdevs": 4, 00:14:18.862 "num_base_bdevs_discovered": 3, 00:14:18.862 "num_base_bdevs_operational": 3, 00:14:18.862 "base_bdevs_list": [ 00:14:18.862 { 00:14:18.862 "name": null, 00:14:18.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.862 "is_configured": false, 00:14:18.862 "data_offset": 0, 00:14:18.862 "data_size": 63488 00:14:18.862 }, 00:14:18.862 { 00:14:18.862 "name": "BaseBdev2", 00:14:18.862 "uuid": "cb204491-6617-532a-a7f2-cf99faec8529", 00:14:18.862 "is_configured": true, 00:14:18.862 "data_offset": 2048, 00:14:18.862 "data_size": 63488 00:14:18.862 }, 00:14:18.862 { 00:14:18.862 "name": "BaseBdev3", 00:14:18.862 "uuid": "3ad71a3d-8f83-5112-9914-f49bc2d1caa1", 00:14:18.862 "is_configured": true, 00:14:18.862 "data_offset": 2048, 00:14:18.862 "data_size": 63488 00:14:18.862 }, 00:14:18.862 { 00:14:18.862 "name": "BaseBdev4", 00:14:18.862 "uuid": "77c9ed8d-e229-5913-bb07-0eb90dd65344", 00:14:18.862 "is_configured": true, 00:14:18.862 "data_offset": 2048, 00:14:18.862 "data_size": 63488 00:14:18.862 } 00:14:18.862 ] 00:14:18.862 }' 00:14:18.862 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:18.862 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.381 116.50 IOPS, 349.50 MiB/s [2024-11-27T11:52:45.766Z] 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:19.381 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.381 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:19.381 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:19.381 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.381 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.381 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.381 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.381 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.381 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.381 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.381 "name": "raid_bdev1", 00:14:19.381 "uuid": "f15bec40-9c5d-4527-9a64-4040059270c7", 00:14:19.381 "strip_size_kb": 0, 00:14:19.381 "state": "online", 00:14:19.381 "raid_level": "raid1", 00:14:19.381 "superblock": true, 00:14:19.381 "num_base_bdevs": 4, 00:14:19.381 "num_base_bdevs_discovered": 3, 00:14:19.381 "num_base_bdevs_operational": 3, 00:14:19.381 "base_bdevs_list": [ 00:14:19.381 { 00:14:19.381 "name": null, 00:14:19.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.381 "is_configured": false, 00:14:19.381 "data_offset": 0, 00:14:19.381 "data_size": 63488 
00:14:19.381 }, 00:14:19.381 { 00:14:19.381 "name": "BaseBdev2", 00:14:19.381 "uuid": "cb204491-6617-532a-a7f2-cf99faec8529", 00:14:19.381 "is_configured": true, 00:14:19.381 "data_offset": 2048, 00:14:19.381 "data_size": 63488 00:14:19.381 }, 00:14:19.381 { 00:14:19.381 "name": "BaseBdev3", 00:14:19.381 "uuid": "3ad71a3d-8f83-5112-9914-f49bc2d1caa1", 00:14:19.381 "is_configured": true, 00:14:19.381 "data_offset": 2048, 00:14:19.381 "data_size": 63488 00:14:19.381 }, 00:14:19.381 { 00:14:19.381 "name": "BaseBdev4", 00:14:19.381 "uuid": "77c9ed8d-e229-5913-bb07-0eb90dd65344", 00:14:19.381 "is_configured": true, 00:14:19.381 "data_offset": 2048, 00:14:19.381 "data_size": 63488 00:14:19.381 } 00:14:19.381 ] 00:14:19.381 }' 00:14:19.381 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.381 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:19.381 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.381 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:19.381 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:19.381 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.381 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.381 [2024-11-27 11:52:45.680618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:19.381 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.381 11:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:19.381 [2024-11-27 11:52:45.722448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:19.381 
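The repeated `[[ rebuild == \r\e\b\u\i\l\d ]]` lines throughout this trace are not corruption: inside `[[ ]]` an unquoted right-hand side is a glob pattern, so the harness backslash-escapes every character to force a literal comparison, and xtrace prints the escaped form. A minimal sketch of the two behaviors (variable name `s` is illustrative, not from the script):

```shell
# In [[ ]], an unquoted RHS is a glob pattern; backslash-escaping each
# character (as the trace shows) forces a literal, character-by-character match.
s="rebuild"
if [[ $s == \r\e\b\u\i\l\d ]]; then echo "literal match"; fi
if [[ $s == re* ]]; then echo "glob match"; fi
```

Both lines print here because `\r\e\b\u\i\l\d` quotes each character back to plain `rebuild`, while `re*` matches as a glob.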
[2024-11-27 11:52:45.724522] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:19.642 [2024-11-27 11:52:45.840626] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:19.642 [2024-11-27 11:52:45.841242] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:19.901 [2024-11-27 11:52:46.059610] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:19.901 [2024-11-27 11:52:46.060468] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:20.161 138.33 IOPS, 415.00 MiB/s [2024-11-27T11:52:46.546Z] [2024-11-27 11:52:46.536849] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:20.420 11:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.420 11:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.420 11:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.420 11:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.420 11:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.420 11:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.420 11:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.420 11:52:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.420 11:52:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 
-- # set +x 00:14:20.420 11:52:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.420 11:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.420 "name": "raid_bdev1", 00:14:20.420 "uuid": "f15bec40-9c5d-4527-9a64-4040059270c7", 00:14:20.420 "strip_size_kb": 0, 00:14:20.420 "state": "online", 00:14:20.420 "raid_level": "raid1", 00:14:20.420 "superblock": true, 00:14:20.420 "num_base_bdevs": 4, 00:14:20.420 "num_base_bdevs_discovered": 4, 00:14:20.420 "num_base_bdevs_operational": 4, 00:14:20.420 "process": { 00:14:20.420 "type": "rebuild", 00:14:20.420 "target": "spare", 00:14:20.420 "progress": { 00:14:20.420 "blocks": 10240, 00:14:20.420 "percent": 16 00:14:20.420 } 00:14:20.420 }, 00:14:20.420 "base_bdevs_list": [ 00:14:20.420 { 00:14:20.420 "name": "spare", 00:14:20.420 "uuid": "c0aae782-842f-5e42-8441-ce089ca88722", 00:14:20.420 "is_configured": true, 00:14:20.420 "data_offset": 2048, 00:14:20.420 "data_size": 63488 00:14:20.420 }, 00:14:20.420 { 00:14:20.420 "name": "BaseBdev2", 00:14:20.420 "uuid": "cb204491-6617-532a-a7f2-cf99faec8529", 00:14:20.420 "is_configured": true, 00:14:20.420 "data_offset": 2048, 00:14:20.420 "data_size": 63488 00:14:20.420 }, 00:14:20.420 { 00:14:20.420 "name": "BaseBdev3", 00:14:20.420 "uuid": "3ad71a3d-8f83-5112-9914-f49bc2d1caa1", 00:14:20.420 "is_configured": true, 00:14:20.420 "data_offset": 2048, 00:14:20.420 "data_size": 63488 00:14:20.420 }, 00:14:20.420 { 00:14:20.420 "name": "BaseBdev4", 00:14:20.420 "uuid": "77c9ed8d-e229-5913-bb07-0eb90dd65344", 00:14:20.420 "is_configured": true, 00:14:20.420 "data_offset": 2048, 00:14:20.420 "data_size": 63488 00:14:20.420 } 00:14:20.420 ] 00:14:20.420 }' 00:14:20.420 11:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.679 11:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:14:20.679 11:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.679 11:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.679 11:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:20.679 11:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:20.679 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:20.679 11:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:20.679 11:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:20.679 11:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:20.679 11:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:20.679 11:52:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.679 11:52:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.679 [2024-11-27 11:52:46.869773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:20.679 [2024-11-27 11:52:46.895317] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:20.679 [2024-11-27 11:52:46.902277] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:20.939 [2024-11-27 11:52:47.109914] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:20.939 [2024-11-27 11:52:47.109996] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.939 "name": "raid_bdev1", 00:14:20.939 "uuid": "f15bec40-9c5d-4527-9a64-4040059270c7", 00:14:20.939 "strip_size_kb": 0, 00:14:20.939 "state": "online", 00:14:20.939 "raid_level": "raid1", 00:14:20.939 "superblock": true, 00:14:20.939 "num_base_bdevs": 4, 00:14:20.939 "num_base_bdevs_discovered": 3, 00:14:20.939 "num_base_bdevs_operational": 3, 00:14:20.939 "process": { 00:14:20.939 "type": "rebuild", 00:14:20.939 "target": "spare", 00:14:20.939 "progress": { 
00:14:20.939 "blocks": 14336, 00:14:20.939 "percent": 22 00:14:20.939 } 00:14:20.939 }, 00:14:20.939 "base_bdevs_list": [ 00:14:20.939 { 00:14:20.939 "name": "spare", 00:14:20.939 "uuid": "c0aae782-842f-5e42-8441-ce089ca88722", 00:14:20.939 "is_configured": true, 00:14:20.939 "data_offset": 2048, 00:14:20.939 "data_size": 63488 00:14:20.939 }, 00:14:20.939 { 00:14:20.939 "name": null, 00:14:20.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.939 "is_configured": false, 00:14:20.939 "data_offset": 0, 00:14:20.939 "data_size": 63488 00:14:20.939 }, 00:14:20.939 { 00:14:20.939 "name": "BaseBdev3", 00:14:20.939 "uuid": "3ad71a3d-8f83-5112-9914-f49bc2d1caa1", 00:14:20.939 "is_configured": true, 00:14:20.939 "data_offset": 2048, 00:14:20.939 "data_size": 63488 00:14:20.939 }, 00:14:20.939 { 00:14:20.939 "name": "BaseBdev4", 00:14:20.939 "uuid": "77c9ed8d-e229-5913-bb07-0eb90dd65344", 00:14:20.939 "is_configured": true, 00:14:20.939 "data_offset": 2048, 00:14:20.939 "data_size": 63488 00:14:20.939 } 00:14:20.939 ] 00:14:20.939 }' 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.939 [2024-11-27 11:52:47.250896] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=502 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.939 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.227 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.227 "name": "raid_bdev1", 00:14:21.227 "uuid": "f15bec40-9c5d-4527-9a64-4040059270c7", 00:14:21.228 "strip_size_kb": 0, 00:14:21.228 "state": "online", 00:14:21.228 "raid_level": "raid1", 00:14:21.228 "superblock": true, 00:14:21.228 "num_base_bdevs": 4, 00:14:21.228 "num_base_bdevs_discovered": 3, 00:14:21.228 "num_base_bdevs_operational": 3, 00:14:21.228 "process": { 00:14:21.228 "type": "rebuild", 00:14:21.228 "target": "spare", 00:14:21.228 "progress": { 00:14:21.228 "blocks": 16384, 00:14:21.228 "percent": 25 00:14:21.228 } 00:14:21.228 }, 00:14:21.228 "base_bdevs_list": [ 00:14:21.228 { 00:14:21.228 "name": "spare", 00:14:21.228 "uuid": "c0aae782-842f-5e42-8441-ce089ca88722", 00:14:21.228 "is_configured": true, 00:14:21.228 "data_offset": 2048, 00:14:21.228 "data_size": 63488 00:14:21.228 }, 00:14:21.228 { 
00:14:21.228 "name": null, 00:14:21.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.228 "is_configured": false, 00:14:21.228 "data_offset": 0, 00:14:21.228 "data_size": 63488 00:14:21.228 }, 00:14:21.228 { 00:14:21.228 "name": "BaseBdev3", 00:14:21.228 "uuid": "3ad71a3d-8f83-5112-9914-f49bc2d1caa1", 00:14:21.228 "is_configured": true, 00:14:21.228 "data_offset": 2048, 00:14:21.228 "data_size": 63488 00:14:21.228 }, 00:14:21.228 { 00:14:21.228 "name": "BaseBdev4", 00:14:21.228 "uuid": "77c9ed8d-e229-5913-bb07-0eb90dd65344", 00:14:21.228 "is_configured": true, 00:14:21.228 "data_offset": 2048, 00:14:21.228 "data_size": 63488 00:14:21.228 } 00:14:21.228 ] 00:14:21.228 }' 00:14:21.228 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.228 119.25 IOPS, 357.75 MiB/s [2024-11-27T11:52:47.613Z] 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:21.228 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.228 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.228 11:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:21.228 [2024-11-27 11:52:47.582143] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:21.815 [2024-11-27 11:52:47.961001] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:21.815 [2024-11-27 11:52:48.186596] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:21.815 [2024-11-27 11:52:48.187055] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:22.075 110.20 IOPS, 330.60 MiB/s 
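The `line 666: [: =: unary operator expected` error in the trace above (`'[' = false ']'`) is the classic unquoted-variable bug in a single-bracket test: when the variable is empty or unset, the expansion vanishes and `[` is left with a malformed expression. A minimal reproduction, using a hypothetical variable name rather than the script's actual one:

```shell
# With an empty variable, `[ $flag = false ]` expands to `[ = false ]`,
# which is exactly the "unary operator expected" failure seen in the log.
flag=""
[ $flag = false ] 2>/dev/null || echo "unquoted test errors out"
# Quoting the expansion keeps the test well-formed even when empty:
[ "$flag" = false ] || echo "quoted test is merely false"
```

Quoting (or switching to `[[ ]]`, which does not word-split) turns a shell error into an ordinary false result.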
[2024-11-27T11:52:48.460Z] [2024-11-27 11:52:48.418994] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:22.075 11:52:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:22.075 11:52:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.075 11:52:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.075 11:52:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.075 11:52:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.075 11:52:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.075 11:52:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.075 11:52:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.075 11:52:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.075 11:52:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.075 11:52:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.334 11:52:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.334 "name": "raid_bdev1", 00:14:22.334 "uuid": "f15bec40-9c5d-4527-9a64-4040059270c7", 00:14:22.334 "strip_size_kb": 0, 00:14:22.334 "state": "online", 00:14:22.334 "raid_level": "raid1", 00:14:22.334 "superblock": true, 00:14:22.334 "num_base_bdevs": 4, 00:14:22.334 "num_base_bdevs_discovered": 3, 00:14:22.334 "num_base_bdevs_operational": 3, 00:14:22.334 "process": { 00:14:22.334 "type": "rebuild", 00:14:22.334 "target": "spare", 00:14:22.334 
"progress": { 00:14:22.334 "blocks": 32768, 00:14:22.334 "percent": 51 00:14:22.334 } 00:14:22.334 }, 00:14:22.334 "base_bdevs_list": [ 00:14:22.334 { 00:14:22.334 "name": "spare", 00:14:22.334 "uuid": "c0aae782-842f-5e42-8441-ce089ca88722", 00:14:22.334 "is_configured": true, 00:14:22.334 "data_offset": 2048, 00:14:22.334 "data_size": 63488 00:14:22.334 }, 00:14:22.334 { 00:14:22.334 "name": null, 00:14:22.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.334 "is_configured": false, 00:14:22.334 "data_offset": 0, 00:14:22.334 "data_size": 63488 00:14:22.334 }, 00:14:22.334 { 00:14:22.334 "name": "BaseBdev3", 00:14:22.334 "uuid": "3ad71a3d-8f83-5112-9914-f49bc2d1caa1", 00:14:22.334 "is_configured": true, 00:14:22.334 "data_offset": 2048, 00:14:22.334 "data_size": 63488 00:14:22.334 }, 00:14:22.334 { 00:14:22.334 "name": "BaseBdev4", 00:14:22.334 "uuid": "77c9ed8d-e229-5913-bb07-0eb90dd65344", 00:14:22.334 "is_configured": true, 00:14:22.334 "data_offset": 2048, 00:14:22.334 "data_size": 63488 00:14:22.334 } 00:14:22.334 ] 00:14:22.334 }' 00:14:22.334 11:52:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.334 11:52:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.334 11:52:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.334 11:52:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.334 11:52:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:22.334 [2024-11-27 11:52:48.633619] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:22.334 [2024-11-27 11:52:48.634202] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:22.901 [2024-11-27 11:52:49.078226] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:23.159 99.33 IOPS, 298.00 MiB/s [2024-11-27T11:52:49.544Z] [2024-11-27 11:52:49.519718] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:23.418 11:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:23.418 11:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.418 11:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.418 11:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.418 11:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.418 11:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.418 11:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.418 11:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.418 11:52:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.418 11:52:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.418 11:52:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.418 11:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.418 "name": "raid_bdev1", 00:14:23.418 "uuid": "f15bec40-9c5d-4527-9a64-4040059270c7", 00:14:23.418 "strip_size_kb": 0, 00:14:23.418 "state": "online", 00:14:23.418 "raid_level": "raid1", 00:14:23.418 "superblock": true, 00:14:23.418 "num_base_bdevs": 4, 00:14:23.418 
"num_base_bdevs_discovered": 3, 00:14:23.418 "num_base_bdevs_operational": 3, 00:14:23.418 "process": { 00:14:23.418 "type": "rebuild", 00:14:23.418 "target": "spare", 00:14:23.418 "progress": { 00:14:23.418 "blocks": 47104, 00:14:23.418 "percent": 74 00:14:23.418 } 00:14:23.418 }, 00:14:23.418 "base_bdevs_list": [ 00:14:23.418 { 00:14:23.418 "name": "spare", 00:14:23.418 "uuid": "c0aae782-842f-5e42-8441-ce089ca88722", 00:14:23.418 "is_configured": true, 00:14:23.418 "data_offset": 2048, 00:14:23.418 "data_size": 63488 00:14:23.418 }, 00:14:23.418 { 00:14:23.418 "name": null, 00:14:23.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.418 "is_configured": false, 00:14:23.418 "data_offset": 0, 00:14:23.418 "data_size": 63488 00:14:23.418 }, 00:14:23.418 { 00:14:23.418 "name": "BaseBdev3", 00:14:23.418 "uuid": "3ad71a3d-8f83-5112-9914-f49bc2d1caa1", 00:14:23.418 "is_configured": true, 00:14:23.418 "data_offset": 2048, 00:14:23.418 "data_size": 63488 00:14:23.418 }, 00:14:23.418 { 00:14:23.418 "name": "BaseBdev4", 00:14:23.418 "uuid": "77c9ed8d-e229-5913-bb07-0eb90dd65344", 00:14:23.418 "is_configured": true, 00:14:23.418 "data_offset": 2048, 00:14:23.418 "data_size": 63488 00:14:23.418 } 00:14:23.418 ] 00:14:23.418 }' 00:14:23.418 11:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.418 11:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:23.418 11:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.418 11:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:23.418 11:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:24.243 89.86 IOPS, 269.57 MiB/s [2024-11-27T11:52:50.628Z] [2024-11-27 11:52:50.383487] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 
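The `local timeout=502` / `(( SECONDS < timeout ))` / `sleep 1` lines (bdev_raid.sh@706-711) show the harness bounding its rebuild wait with bash's built-in `SECONDS` counter rather than a retry count. A simplified sketch of that loop shape; `check_done` is a hypothetical stand-in for the real `rpc_cmd`/`jq` process check:

```shell
# Poll until the rebuild process disappears, bounded by wall-clock time.
# SECONDS is bash's running elapsed-time counter, so the deadline survives
# however long each individual poll takes.
timeout=$((SECONDS + 5))
check_done() { return 0; }   # hypothetical: pretend the rebuild already finished
while (( SECONDS < timeout )); do
  if check_done; then echo "rebuild finished"; break; fi
  sleep 1
done
```

Using `SECONDS` keeps the bound honest even when the per-iteration RPC is slow, which a fixed iteration count would not.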
00:14:24.243 [2024-11-27 11:52:50.490322] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:24.243 [2024-11-27 11:52:50.495023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.502 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:24.502 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.502 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.502 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.502 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.502 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.502 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.502 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.502 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.502 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.502 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.502 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.502 "name": "raid_bdev1", 00:14:24.502 "uuid": "f15bec40-9c5d-4527-9a64-4040059270c7", 00:14:24.502 "strip_size_kb": 0, 00:14:24.502 "state": "online", 00:14:24.502 "raid_level": "raid1", 00:14:24.502 "superblock": true, 00:14:24.502 "num_base_bdevs": 4, 00:14:24.502 "num_base_bdevs_discovered": 3, 00:14:24.502 "num_base_bdevs_operational": 3, 00:14:24.502 "base_bdevs_list": [ 
00:14:24.502 { 00:14:24.502 "name": "spare", 00:14:24.502 "uuid": "c0aae782-842f-5e42-8441-ce089ca88722", 00:14:24.502 "is_configured": true, 00:14:24.502 "data_offset": 2048, 00:14:24.502 "data_size": 63488 00:14:24.502 }, 00:14:24.502 { 00:14:24.502 "name": null, 00:14:24.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.502 "is_configured": false, 00:14:24.502 "data_offset": 0, 00:14:24.502 "data_size": 63488 00:14:24.502 }, 00:14:24.502 { 00:14:24.502 "name": "BaseBdev3", 00:14:24.502 "uuid": "3ad71a3d-8f83-5112-9914-f49bc2d1caa1", 00:14:24.502 "is_configured": true, 00:14:24.502 "data_offset": 2048, 00:14:24.502 "data_size": 63488 00:14:24.502 }, 00:14:24.502 { 00:14:24.502 "name": "BaseBdev4", 00:14:24.502 "uuid": "77c9ed8d-e229-5913-bb07-0eb90dd65344", 00:14:24.502 "is_configured": true, 00:14:24.502 "data_offset": 2048, 00:14:24.502 "data_size": 63488 00:14:24.502 } 00:14:24.502 ] 00:14:24.502 }' 00:14:24.502 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.502 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:24.502 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.502 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:24.502 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:24.502 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:24.502 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.502 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:24.502 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:24.502 11:52:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.502 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.502 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.502 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.502 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.760 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.760 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.760 "name": "raid_bdev1", 00:14:24.760 "uuid": "f15bec40-9c5d-4527-9a64-4040059270c7", 00:14:24.760 "strip_size_kb": 0, 00:14:24.760 "state": "online", 00:14:24.760 "raid_level": "raid1", 00:14:24.760 "superblock": true, 00:14:24.760 "num_base_bdevs": 4, 00:14:24.760 "num_base_bdevs_discovered": 3, 00:14:24.760 "num_base_bdevs_operational": 3, 00:14:24.760 "base_bdevs_list": [ 00:14:24.760 { 00:14:24.760 "name": "spare", 00:14:24.760 "uuid": "c0aae782-842f-5e42-8441-ce089ca88722", 00:14:24.760 "is_configured": true, 00:14:24.760 "data_offset": 2048, 00:14:24.760 "data_size": 63488 00:14:24.760 }, 00:14:24.760 { 00:14:24.760 "name": null, 00:14:24.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.760 "is_configured": false, 00:14:24.760 "data_offset": 0, 00:14:24.760 "data_size": 63488 00:14:24.760 }, 00:14:24.760 { 00:14:24.760 "name": "BaseBdev3", 00:14:24.760 "uuid": "3ad71a3d-8f83-5112-9914-f49bc2d1caa1", 00:14:24.760 "is_configured": true, 00:14:24.760 "data_offset": 2048, 00:14:24.760 "data_size": 63488 00:14:24.760 }, 00:14:24.760 { 00:14:24.760 "name": "BaseBdev4", 00:14:24.760 "uuid": "77c9ed8d-e229-5913-bb07-0eb90dd65344", 00:14:24.760 "is_configured": true, 00:14:24.760 "data_offset": 2048, 
00:14:24.760 "data_size": 63488 00:14:24.760 } 00:14:24.760 ] 00:14:24.760 }' 00:14:24.760 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.760 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:24.760 11:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.760 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:24.760 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:24.760 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.760 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.760 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.760 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.760 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.760 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.760 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.760 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.760 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.760 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.760 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.760 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
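The trace above (bdev_raid.sh@707-709 and verify_raid_bdev_process) polls `rpc_cmd bdev_raid_get_bdevs` and extracts `.process.type` / `.process.target` with jq until the rebuild is no longer reported. A minimal standalone sketch of that polling pattern, with the RPC-plus-jq step stubbed out by a placeholder function so it runs without an SPDK target (`fake_process_state`, the 5-iteration bound, and the field names are illustrative assumptions, not the real script):

```shell
# Stand-in for: rpc_cmd bdev_raid_get_bdevs all | jq -r '.process.type, .process.target'
# A finished rebuild reports "none none", which ends the loop (bdev_raid.sh@709 break).
fake_process_state() { echo "none none"; }

i=0
while [ "$i" -lt 5 ]; do               # bounded poll, like (( SECONDS < timeout ))
  set -- $(fake_process_state)
  ptype=$1; ptarget=$2
  if [ "$ptype" = "rebuild" ] && [ "$ptarget" = "spare" ]; then
    sleep 1                            # rebuild still running against the spare
  else
    break                              # process gone: verify "none none" next
  fi
  i=$((i + 1))
done
echo "process=$ptype target=$ptarget"
```

With the stub always answering "none none", the loop exits on the first iteration, mirroring the `[[ none == \n\o\n\e ]]` checks in the trace.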
00:14:24.760 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.760 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.760 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.760 "name": "raid_bdev1", 00:14:24.760 "uuid": "f15bec40-9c5d-4527-9a64-4040059270c7", 00:14:24.760 "strip_size_kb": 0, 00:14:24.760 "state": "online", 00:14:24.760 "raid_level": "raid1", 00:14:24.760 "superblock": true, 00:14:24.760 "num_base_bdevs": 4, 00:14:24.760 "num_base_bdevs_discovered": 3, 00:14:24.760 "num_base_bdevs_operational": 3, 00:14:24.760 "base_bdevs_list": [ 00:14:24.760 { 00:14:24.760 "name": "spare", 00:14:24.760 "uuid": "c0aae782-842f-5e42-8441-ce089ca88722", 00:14:24.760 "is_configured": true, 00:14:24.760 "data_offset": 2048, 00:14:24.760 "data_size": 63488 00:14:24.760 }, 00:14:24.760 { 00:14:24.760 "name": null, 00:14:24.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.760 "is_configured": false, 00:14:24.760 "data_offset": 0, 00:14:24.760 "data_size": 63488 00:14:24.760 }, 00:14:24.760 { 00:14:24.760 "name": "BaseBdev3", 00:14:24.760 "uuid": "3ad71a3d-8f83-5112-9914-f49bc2d1caa1", 00:14:24.760 "is_configured": true, 00:14:24.760 "data_offset": 2048, 00:14:24.760 "data_size": 63488 00:14:24.760 }, 00:14:24.760 { 00:14:24.760 "name": "BaseBdev4", 00:14:24.760 "uuid": "77c9ed8d-e229-5913-bb07-0eb90dd65344", 00:14:24.760 "is_configured": true, 00:14:24.760 "data_offset": 2048, 00:14:24.760 "data_size": 63488 00:14:24.760 } 00:14:24.760 ] 00:14:24.760 }' 00:14:24.760 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.760 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.279 81.75 IOPS, 245.25 MiB/s [2024-11-27T11:52:51.664Z] 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # 
rpc_cmd bdev_raid_delete raid_bdev1 00:14:25.279 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.279 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.279 [2024-11-27 11:52:51.456986] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:25.279 [2024-11-27 11:52:51.457102] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:25.279 00:14:25.279 Latency(us) 00:14:25.279 [2024-11-27T11:52:51.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.279 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:25.279 raid_bdev1 : 8.16 80.61 241.84 0.00 0.00 16916.71 336.27 112183.90 00:14:25.279 [2024-11-27T11:52:51.664Z] =================================================================================================================== 00:14:25.279 [2024-11-27T11:52:51.664Z] Total : 80.61 241.84 0.00 0.00 16916.71 336.27 112183.90 00:14:25.279 { 00:14:25.279 "results": [ 00:14:25.279 { 00:14:25.279 "job": "raid_bdev1", 00:14:25.279 "core_mask": "0x1", 00:14:25.279 "workload": "randrw", 00:14:25.279 "percentage": 50, 00:14:25.279 "status": "finished", 00:14:25.279 "queue_depth": 2, 00:14:25.279 "io_size": 3145728, 00:14:25.279 "runtime": 8.162331, 00:14:25.279 "iops": 80.61422649975846, 00:14:25.279 "mibps": 241.8426794992754, 00:14:25.279 "io_failed": 0, 00:14:25.279 "io_timeout": 0, 00:14:25.279 "avg_latency_us": 16916.705347685856, 00:14:25.279 "min_latency_us": 336.2655021834061, 00:14:25.279 "max_latency_us": 112183.89519650655 00:14:25.279 } 00:14:25.279 ], 00:14:25.279 "core_count": 1 00:14:25.279 } 00:14:25.279 [2024-11-27 11:52:51.512989] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.279 [2024-11-27 11:52:51.513069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
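The Latency table above reports 80.61 IOPS and 241.84 MiB/s for an `io_size` of 3145728 bytes (3 MiB). Those two columns are consistent: MiB/s is just IOPS times the I/O size in MiB. A quick arithmetic check using the full-precision values from the JSON results block (awk is used only for floating point; the numbers are copied from the log):

```shell
iops=80.61422649975846   # "iops" from the results JSON above
io_size=3145728          # "io_size" in bytes (3 MiB per I/O)

# MiB/s = IOPS * io_size / 2^20
mibps=$(awk -v iops="$iops" -v sz="$io_size" \
  'BEGIN { printf "%.2f", iops * sz / (1024 * 1024) }')
echo "$mibps MiB/s"      # 241.84 MiB/s, matching the "mibps" field
```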
00:14:25.279 [2024-11-27 11:52:51.513191] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:25.279 [2024-11-27 11:52:51.513204] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:25.279 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.279 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.279 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:25.279 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.279 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.279 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.279 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:25.279 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:25.279 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:25.279 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:25.279 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:25.279 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:25.279 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:25.279 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:25.279 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:25.279 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@12 -- # local i 00:14:25.279 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:25.279 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:25.279 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:25.537 /dev/nbd0 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:25.537 1+0 records in 00:14:25.537 1+0 records out 00:14:25.537 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555183 s, 7.4 MB/s 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.537 
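The `waitfornbd` helper traced above retries `grep -q -w nbd0 /proc/partitions` up to 20 times before the `dd` sanity read. A generic sketch of that wait-for-device pattern (the function name `waitfordev`, the retry parameter, and the 0.1 s sleep are illustrative assumptions; the real helper lives in autotest_common.sh):

```shell
# Poll /proc/partitions until a device name appears, or give up.
# $1 = device name (e.g. nbd0), $2 = max attempts (defaults to 20).
waitfordev() {
  name=$1; tries=${2:-20}; i=1
  while [ "$i" -le "$tries" ]; do
    # -w matches the whole device name, so nbd0 does not match nbd01
    grep -q -w "$name" /proc/partitions 2>/dev/null && return 0
    sleep 0.1
    i=$((i + 1))
  done
  return 1                 # device never showed up
}
```

On success the caller can proceed to the direct-I/O `dd` read seen in the log; on timeout it should fail the test rather than race the kernel.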
11:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:25.537 11:52:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:25.537 11:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:25.794 /dev/nbd1 00:14:25.794 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:25.794 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:25.794 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:25.794 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:25.794 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:25.794 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:25.794 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:25.794 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:25.794 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:25.794 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:25.794 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:25.794 1+0 records in 00:14:25.794 1+0 records out 00:14:25.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555748 s, 7.4 MB/s 00:14:25.794 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.794 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@890 -- # size=4096 00:14:25.794 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.052 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:26.052 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:26.052 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:26.053 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:26.053 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:26.053 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:26.053 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:26.053 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:26.053 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:26.053 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:26.053 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.053 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:26.311 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:26.311 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:26.311 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:26.311 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.311 
11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.311 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:26.311 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:26.311 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.311 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:26.311 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:26.311 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:26.311 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:26.311 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:26.311 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:26.311 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:26.311 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:26.311 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:26.311 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:26.311 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:26.311 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:26.569 /dev/nbd1 00:14:26.569 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:26.569 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd1 00:14:26.569 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:26.569 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:26.569 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:26.569 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:26.569 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:26.828 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:26.828 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:26.828 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:26.828 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:26.828 1+0 records in 00:14:26.828 1+0 records out 00:14:26.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408926 s, 10.0 MB/s 00:14:26.828 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.828 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:26.828 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.828 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:26.828 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:26.828 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:26.828 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:26.828 11:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:26.828 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:26.828 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:26.828 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:26.828 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:26.828 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:26.828 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.828 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:27.088 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:27.088 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:27.088 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:27.088 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:27.088 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:27.088 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:27.088 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:27.088 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:27.088 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:27.088 11:52:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:27.088 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:27.088 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:27.088 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:27.088 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:27.088 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:27.348 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:27.348 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:27.348 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:27.348 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:27.348 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:27.348 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:27.348 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:27.348 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:27.348 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:27.348 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:27.348 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.348 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.348 11:52:53 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.348 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:27.348 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.348 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.348 [2024-11-27 11:52:53.618377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:27.348 [2024-11-27 11:52:53.618449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.348 [2024-11-27 11:52:53.618479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:27.348 [2024-11-27 11:52:53.618500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.348 [2024-11-27 11:52:53.621095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.348 [2024-11-27 11:52:53.621204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:27.348 [2024-11-27 11:52:53.621334] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:27.348 [2024-11-27 11:52:53.621413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:27.348 [2024-11-27 11:52:53.621595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:27.348 [2024-11-27 11:52:53.621722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:27.348 spare 00:14:27.348 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.349 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:27.349 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:27.349 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.349 [2024-11-27 11:52:53.721669] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:27.349 [2024-11-27 11:52:53.721724] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:27.349 [2024-11-27 11:52:53.722148] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:27.349 [2024-11-27 11:52:53.722396] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:27.349 [2024-11-27 11:52:53.722420] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:27.349 [2024-11-27 11:52:53.722651] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.349 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.349 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:27.349 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.349 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.349 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.349 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.349 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.349 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.349 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.349 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- 
# local num_base_bdevs_discovered 00:14:27.349 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.608 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.608 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.608 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.608 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.608 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.608 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.608 "name": "raid_bdev1", 00:14:27.608 "uuid": "f15bec40-9c5d-4527-9a64-4040059270c7", 00:14:27.608 "strip_size_kb": 0, 00:14:27.608 "state": "online", 00:14:27.608 "raid_level": "raid1", 00:14:27.608 "superblock": true, 00:14:27.608 "num_base_bdevs": 4, 00:14:27.608 "num_base_bdevs_discovered": 3, 00:14:27.608 "num_base_bdevs_operational": 3, 00:14:27.608 "base_bdevs_list": [ 00:14:27.608 { 00:14:27.608 "name": "spare", 00:14:27.608 "uuid": "c0aae782-842f-5e42-8441-ce089ca88722", 00:14:27.608 "is_configured": true, 00:14:27.608 "data_offset": 2048, 00:14:27.608 "data_size": 63488 00:14:27.608 }, 00:14:27.608 { 00:14:27.608 "name": null, 00:14:27.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.608 "is_configured": false, 00:14:27.608 "data_offset": 2048, 00:14:27.608 "data_size": 63488 00:14:27.608 }, 00:14:27.608 { 00:14:27.608 "name": "BaseBdev3", 00:14:27.608 "uuid": "3ad71a3d-8f83-5112-9914-f49bc2d1caa1", 00:14:27.608 "is_configured": true, 00:14:27.608 "data_offset": 2048, 00:14:27.608 "data_size": 63488 00:14:27.608 }, 00:14:27.608 { 00:14:27.608 "name": "BaseBdev4", 00:14:27.608 "uuid": "77c9ed8d-e229-5913-bb07-0eb90dd65344", 00:14:27.608 
"is_configured": true, 00:14:27.608 "data_offset": 2048, 00:14:27.608 "data_size": 63488 00:14:27.608 } 00:14:27.608 ] 00:14:27.608 }' 00:14:27.608 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.608 11:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.939 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:27.939 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.939 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:27.939 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:27.939 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.939 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.939 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.939 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.939 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.939 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.939 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.939 "name": "raid_bdev1", 00:14:27.939 "uuid": "f15bec40-9c5d-4527-9a64-4040059270c7", 00:14:27.939 "strip_size_kb": 0, 00:14:27.939 "state": "online", 00:14:27.939 "raid_level": "raid1", 00:14:27.939 "superblock": true, 00:14:27.939 "num_base_bdevs": 4, 00:14:27.939 "num_base_bdevs_discovered": 3, 00:14:27.939 "num_base_bdevs_operational": 3, 00:14:27.939 "base_bdevs_list": [ 00:14:27.939 { 00:14:27.940 "name": 
"spare", 00:14:27.940 "uuid": "c0aae782-842f-5e42-8441-ce089ca88722", 00:14:27.940 "is_configured": true, 00:14:27.940 "data_offset": 2048, 00:14:27.940 "data_size": 63488 00:14:27.940 }, 00:14:27.940 { 00:14:27.940 "name": null, 00:14:27.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.940 "is_configured": false, 00:14:27.940 "data_offset": 2048, 00:14:27.940 "data_size": 63488 00:14:27.940 }, 00:14:27.940 { 00:14:27.940 "name": "BaseBdev3", 00:14:27.940 "uuid": "3ad71a3d-8f83-5112-9914-f49bc2d1caa1", 00:14:27.940 "is_configured": true, 00:14:27.940 "data_offset": 2048, 00:14:27.940 "data_size": 63488 00:14:27.940 }, 00:14:27.940 { 00:14:27.940 "name": "BaseBdev4", 00:14:27.940 "uuid": "77c9ed8d-e229-5913-bb07-0eb90dd65344", 00:14:27.940 "is_configured": true, 00:14:27.940 "data_offset": 2048, 00:14:27.940 "data_size": 63488 00:14:27.940 } 00:14:27.940 ] 00:14:27.940 }' 00:14:27.940 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ 
spare == \s\p\a\r\e ]] 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.223 [2024-11-27 11:52:54.433596] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.223 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.223 "name": "raid_bdev1", 00:14:28.223 "uuid": "f15bec40-9c5d-4527-9a64-4040059270c7", 00:14:28.223 "strip_size_kb": 0, 00:14:28.223 "state": "online", 00:14:28.223 "raid_level": "raid1", 00:14:28.223 "superblock": true, 00:14:28.223 "num_base_bdevs": 4, 00:14:28.223 "num_base_bdevs_discovered": 2, 00:14:28.223 "num_base_bdevs_operational": 2, 00:14:28.223 "base_bdevs_list": [ 00:14:28.223 { 00:14:28.223 "name": null, 00:14:28.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.223 "is_configured": false, 00:14:28.223 "data_offset": 0, 00:14:28.223 "data_size": 63488 00:14:28.223 }, 00:14:28.223 { 00:14:28.223 "name": null, 00:14:28.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.223 "is_configured": false, 00:14:28.223 "data_offset": 2048, 00:14:28.223 "data_size": 63488 00:14:28.223 }, 00:14:28.223 { 00:14:28.224 "name": "BaseBdev3", 00:14:28.224 "uuid": "3ad71a3d-8f83-5112-9914-f49bc2d1caa1", 00:14:28.224 "is_configured": true, 00:14:28.224 "data_offset": 2048, 00:14:28.224 "data_size": 63488 00:14:28.224 }, 00:14:28.224 { 00:14:28.224 "name": "BaseBdev4", 00:14:28.224 "uuid": "77c9ed8d-e229-5913-bb07-0eb90dd65344", 00:14:28.224 "is_configured": true, 00:14:28.224 "data_offset": 2048, 00:14:28.224 "data_size": 63488 00:14:28.224 } 00:14:28.224 ] 00:14:28.224 }' 00:14:28.224 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.224 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.792 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd 
bdev_raid_add_base_bdev raid_bdev1 spare 00:14:28.792 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.792 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.792 [2024-11-27 11:52:54.932877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:28.792 [2024-11-27 11:52:54.933100] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:28.792 [2024-11-27 11:52:54.933117] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:28.792 [2024-11-27 11:52:54.933167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:28.792 [2024-11-27 11:52:54.951251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:14:28.792 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.792 11:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:28.792 [2024-11-27 11:52:54.953446] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:29.728 11:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.728 11:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.728 11:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.728 11:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.728 11:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.728 11:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.728 11:52:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.728 11:52:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.728 11:52:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.728 11:52:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.728 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.728 "name": "raid_bdev1", 00:14:29.728 "uuid": "f15bec40-9c5d-4527-9a64-4040059270c7", 00:14:29.728 "strip_size_kb": 0, 00:14:29.728 "state": "online", 00:14:29.728 "raid_level": "raid1", 00:14:29.728 "superblock": true, 00:14:29.728 "num_base_bdevs": 4, 00:14:29.728 "num_base_bdevs_discovered": 3, 00:14:29.728 "num_base_bdevs_operational": 3, 00:14:29.728 "process": { 00:14:29.728 "type": "rebuild", 00:14:29.728 "target": "spare", 00:14:29.728 "progress": { 00:14:29.728 "blocks": 20480, 00:14:29.728 "percent": 32 00:14:29.728 } 00:14:29.728 }, 00:14:29.728 "base_bdevs_list": [ 00:14:29.728 { 00:14:29.728 "name": "spare", 00:14:29.728 "uuid": "c0aae782-842f-5e42-8441-ce089ca88722", 00:14:29.728 "is_configured": true, 00:14:29.728 "data_offset": 2048, 00:14:29.728 "data_size": 63488 00:14:29.728 }, 00:14:29.728 { 00:14:29.728 "name": null, 00:14:29.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.728 "is_configured": false, 00:14:29.728 "data_offset": 2048, 00:14:29.728 "data_size": 63488 00:14:29.728 }, 00:14:29.728 { 00:14:29.729 "name": "BaseBdev3", 00:14:29.729 "uuid": "3ad71a3d-8f83-5112-9914-f49bc2d1caa1", 00:14:29.729 "is_configured": true, 00:14:29.729 "data_offset": 2048, 00:14:29.729 "data_size": 63488 00:14:29.729 }, 00:14:29.729 { 00:14:29.729 "name": "BaseBdev4", 00:14:29.729 "uuid": "77c9ed8d-e229-5913-bb07-0eb90dd65344", 00:14:29.729 "is_configured": true, 00:14:29.729 "data_offset": 2048, 00:14:29.729 
"data_size": 63488 00:14:29.729 } 00:14:29.729 ] 00:14:29.729 }' 00:14:29.729 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.729 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.729 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.987 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.987 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:29.987 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.987 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.987 [2024-11-27 11:52:56.121198] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.987 [2024-11-27 11:52:56.159519] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:29.987 [2024-11-27 11:52:56.159610] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.987 [2024-11-27 11:52:56.159636] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.987 [2024-11-27 11:52:56.159645] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:29.987 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.987 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:29.987 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.987 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.987 11:52:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.987 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.987 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:29.987 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.987 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.987 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.987 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.987 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.987 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.987 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.987 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.987 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.987 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.987 "name": "raid_bdev1", 00:14:29.987 "uuid": "f15bec40-9c5d-4527-9a64-4040059270c7", 00:14:29.987 "strip_size_kb": 0, 00:14:29.987 "state": "online", 00:14:29.987 "raid_level": "raid1", 00:14:29.987 "superblock": true, 00:14:29.987 "num_base_bdevs": 4, 00:14:29.987 "num_base_bdevs_discovered": 2, 00:14:29.987 "num_base_bdevs_operational": 2, 00:14:29.987 "base_bdevs_list": [ 00:14:29.987 { 00:14:29.987 "name": null, 00:14:29.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.987 "is_configured": false, 00:14:29.987 "data_offset": 0, 00:14:29.987 "data_size": 
63488 00:14:29.987 }, 00:14:29.987 { 00:14:29.987 "name": null, 00:14:29.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.987 "is_configured": false, 00:14:29.987 "data_offset": 2048, 00:14:29.987 "data_size": 63488 00:14:29.987 }, 00:14:29.987 { 00:14:29.987 "name": "BaseBdev3", 00:14:29.987 "uuid": "3ad71a3d-8f83-5112-9914-f49bc2d1caa1", 00:14:29.987 "is_configured": true, 00:14:29.987 "data_offset": 2048, 00:14:29.987 "data_size": 63488 00:14:29.987 }, 00:14:29.987 { 00:14:29.987 "name": "BaseBdev4", 00:14:29.987 "uuid": "77c9ed8d-e229-5913-bb07-0eb90dd65344", 00:14:29.987 "is_configured": true, 00:14:29.987 "data_offset": 2048, 00:14:29.987 "data_size": 63488 00:14:29.987 } 00:14:29.987 ] 00:14:29.987 }' 00:14:29.987 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.987 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.555 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:30.555 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.555 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.555 [2024-11-27 11:52:56.673211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:30.555 [2024-11-27 11:52:56.673336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.555 [2024-11-27 11:52:56.673408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:30.555 [2024-11-27 11:52:56.673448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.555 [2024-11-27 11:52:56.674033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.555 [2024-11-27 11:52:56.674099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:14:30.555 [2024-11-27 11:52:56.674222] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:30.555 [2024-11-27 11:52:56.674238] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:30.555 [2024-11-27 11:52:56.674252] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:30.555 [2024-11-27 11:52:56.674276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:30.555 [2024-11-27 11:52:56.692238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:14:30.555 spare 00:14:30.555 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.555 11:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:30.555 [2024-11-27 11:52:56.694422] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:31.491 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.491 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.491 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.491 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.491 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.491 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.491 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.491 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:31.491 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.491 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.491 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.491 "name": "raid_bdev1", 00:14:31.492 "uuid": "f15bec40-9c5d-4527-9a64-4040059270c7", 00:14:31.492 "strip_size_kb": 0, 00:14:31.492 "state": "online", 00:14:31.492 "raid_level": "raid1", 00:14:31.492 "superblock": true, 00:14:31.492 "num_base_bdevs": 4, 00:14:31.492 "num_base_bdevs_discovered": 3, 00:14:31.492 "num_base_bdevs_operational": 3, 00:14:31.492 "process": { 00:14:31.492 "type": "rebuild", 00:14:31.492 "target": "spare", 00:14:31.492 "progress": { 00:14:31.492 "blocks": 20480, 00:14:31.492 "percent": 32 00:14:31.492 } 00:14:31.492 }, 00:14:31.492 "base_bdevs_list": [ 00:14:31.492 { 00:14:31.492 "name": "spare", 00:14:31.492 "uuid": "c0aae782-842f-5e42-8441-ce089ca88722", 00:14:31.492 "is_configured": true, 00:14:31.492 "data_offset": 2048, 00:14:31.492 "data_size": 63488 00:14:31.492 }, 00:14:31.492 { 00:14:31.492 "name": null, 00:14:31.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.492 "is_configured": false, 00:14:31.492 "data_offset": 2048, 00:14:31.492 "data_size": 63488 00:14:31.492 }, 00:14:31.492 { 00:14:31.492 "name": "BaseBdev3", 00:14:31.492 "uuid": "3ad71a3d-8f83-5112-9914-f49bc2d1caa1", 00:14:31.492 "is_configured": true, 00:14:31.492 "data_offset": 2048, 00:14:31.492 "data_size": 63488 00:14:31.492 }, 00:14:31.492 { 00:14:31.492 "name": "BaseBdev4", 00:14:31.492 "uuid": "77c9ed8d-e229-5913-bb07-0eb90dd65344", 00:14:31.492 "is_configured": true, 00:14:31.492 "data_offset": 2048, 00:14:31.492 "data_size": 63488 00:14:31.492 } 00:14:31.492 ] 00:14:31.492 }' 00:14:31.492 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.492 11:52:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.492 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.492 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.492 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:31.492 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.492 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.492 [2024-11-27 11:52:57.850246] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:31.750 [2024-11-27 11:52:57.900561] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:31.750 [2024-11-27 11:52:57.900640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.750 [2024-11-27 11:52:57.900659] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:31.750 [2024-11-27 11:52:57.900670] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:31.750 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.750 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:31.750 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.750 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.750 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.750 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.751 11:52:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:31.751 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.751 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.751 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.751 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.751 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.751 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.751 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.751 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.751 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.751 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.751 "name": "raid_bdev1", 00:14:31.751 "uuid": "f15bec40-9c5d-4527-9a64-4040059270c7", 00:14:31.751 "strip_size_kb": 0, 00:14:31.751 "state": "online", 00:14:31.751 "raid_level": "raid1", 00:14:31.751 "superblock": true, 00:14:31.751 "num_base_bdevs": 4, 00:14:31.751 "num_base_bdevs_discovered": 2, 00:14:31.751 "num_base_bdevs_operational": 2, 00:14:31.751 "base_bdevs_list": [ 00:14:31.751 { 00:14:31.751 "name": null, 00:14:31.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.751 "is_configured": false, 00:14:31.751 "data_offset": 0, 00:14:31.751 "data_size": 63488 00:14:31.751 }, 00:14:31.751 { 00:14:31.751 "name": null, 00:14:31.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.751 "is_configured": false, 00:14:31.751 "data_offset": 2048, 00:14:31.751 
"data_size": 63488 00:14:31.751 }, 00:14:31.751 { 00:14:31.751 "name": "BaseBdev3", 00:14:31.751 "uuid": "3ad71a3d-8f83-5112-9914-f49bc2d1caa1", 00:14:31.751 "is_configured": true, 00:14:31.751 "data_offset": 2048, 00:14:31.751 "data_size": 63488 00:14:31.751 }, 00:14:31.751 { 00:14:31.751 "name": "BaseBdev4", 00:14:31.751 "uuid": "77c9ed8d-e229-5913-bb07-0eb90dd65344", 00:14:31.751 "is_configured": true, 00:14:31.751 "data_offset": 2048, 00:14:31.751 "data_size": 63488 00:14:31.751 } 00:14:31.751 ] 00:14:31.751 }' 00:14:31.751 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.751 11:52:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.318 11:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:32.318 11:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.318 11:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:32.318 11:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:32.318 11:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.318 11:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.318 11:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.318 11:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.318 11:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.318 11:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.318 11:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.318 "name": "raid_bdev1", 
00:14:32.318 "uuid": "f15bec40-9c5d-4527-9a64-4040059270c7", 00:14:32.318 "strip_size_kb": 0, 00:14:32.318 "state": "online", 00:14:32.318 "raid_level": "raid1", 00:14:32.319 "superblock": true, 00:14:32.319 "num_base_bdevs": 4, 00:14:32.319 "num_base_bdevs_discovered": 2, 00:14:32.319 "num_base_bdevs_operational": 2, 00:14:32.319 "base_bdevs_list": [ 00:14:32.319 { 00:14:32.319 "name": null, 00:14:32.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.319 "is_configured": false, 00:14:32.319 "data_offset": 0, 00:14:32.319 "data_size": 63488 00:14:32.319 }, 00:14:32.319 { 00:14:32.319 "name": null, 00:14:32.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.319 "is_configured": false, 00:14:32.319 "data_offset": 2048, 00:14:32.319 "data_size": 63488 00:14:32.319 }, 00:14:32.319 { 00:14:32.319 "name": "BaseBdev3", 00:14:32.319 "uuid": "3ad71a3d-8f83-5112-9914-f49bc2d1caa1", 00:14:32.319 "is_configured": true, 00:14:32.319 "data_offset": 2048, 00:14:32.319 "data_size": 63488 00:14:32.319 }, 00:14:32.319 { 00:14:32.319 "name": "BaseBdev4", 00:14:32.319 "uuid": "77c9ed8d-e229-5913-bb07-0eb90dd65344", 00:14:32.319 "is_configured": true, 00:14:32.319 "data_offset": 2048, 00:14:32.319 "data_size": 63488 00:14:32.319 } 00:14:32.319 ] 00:14:32.319 }' 00:14:32.319 11:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.319 11:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:32.319 11:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.319 11:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:32.319 11:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:32.319 11:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.319 11:52:58 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.319 11:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.319 11:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:32.319 11:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.319 11:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.319 [2024-11-27 11:52:58.575168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:32.319 [2024-11-27 11:52:58.575287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.319 [2024-11-27 11:52:58.575341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:32.319 [2024-11-27 11:52:58.575378] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.319 [2024-11-27 11:52:58.575970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.319 [2024-11-27 11:52:58.576041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:32.319 [2024-11-27 11:52:58.576166] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:32.319 [2024-11-27 11:52:58.576220] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:32.319 [2024-11-27 11:52:58.576265] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:32.319 [2024-11-27 11:52:58.576346] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:32.319 BaseBdev1 00:14:32.319 11:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:32.319 11:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:33.256 11:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:33.256 11:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.256 11:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.256 11:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.256 11:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.256 11:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:33.256 11:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.256 11:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.256 11:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.256 11:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.256 11:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.256 11:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.256 11:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.256 11:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.256 11:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.515 11:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.515 "name": "raid_bdev1", 00:14:33.515 "uuid": 
"f15bec40-9c5d-4527-9a64-4040059270c7", 00:14:33.515 "strip_size_kb": 0, 00:14:33.515 "state": "online", 00:14:33.515 "raid_level": "raid1", 00:14:33.515 "superblock": true, 00:14:33.515 "num_base_bdevs": 4, 00:14:33.515 "num_base_bdevs_discovered": 2, 00:14:33.515 "num_base_bdevs_operational": 2, 00:14:33.515 "base_bdevs_list": [ 00:14:33.515 { 00:14:33.515 "name": null, 00:14:33.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.515 "is_configured": false, 00:14:33.515 "data_offset": 0, 00:14:33.515 "data_size": 63488 00:14:33.515 }, 00:14:33.515 { 00:14:33.515 "name": null, 00:14:33.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.515 "is_configured": false, 00:14:33.515 "data_offset": 2048, 00:14:33.515 "data_size": 63488 00:14:33.515 }, 00:14:33.515 { 00:14:33.515 "name": "BaseBdev3", 00:14:33.515 "uuid": "3ad71a3d-8f83-5112-9914-f49bc2d1caa1", 00:14:33.515 "is_configured": true, 00:14:33.515 "data_offset": 2048, 00:14:33.515 "data_size": 63488 00:14:33.515 }, 00:14:33.515 { 00:14:33.515 "name": "BaseBdev4", 00:14:33.515 "uuid": "77c9ed8d-e229-5913-bb07-0eb90dd65344", 00:14:33.515 "is_configured": true, 00:14:33.515 "data_offset": 2048, 00:14:33.515 "data_size": 63488 00:14:33.515 } 00:14:33.515 ] 00:14:33.515 }' 00:14:33.515 11:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.515 11:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.773 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:33.773 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.773 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:33.773 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:33.773 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.773 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.773 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.773 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.773 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.773 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.773 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.773 "name": "raid_bdev1", 00:14:33.773 "uuid": "f15bec40-9c5d-4527-9a64-4040059270c7", 00:14:33.773 "strip_size_kb": 0, 00:14:33.773 "state": "online", 00:14:33.773 "raid_level": "raid1", 00:14:33.773 "superblock": true, 00:14:33.773 "num_base_bdevs": 4, 00:14:33.773 "num_base_bdevs_discovered": 2, 00:14:33.773 "num_base_bdevs_operational": 2, 00:14:33.773 "base_bdevs_list": [ 00:14:33.773 { 00:14:33.773 "name": null, 00:14:33.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.773 "is_configured": false, 00:14:33.773 "data_offset": 0, 00:14:33.773 "data_size": 63488 00:14:33.773 }, 00:14:33.773 { 00:14:33.773 "name": null, 00:14:33.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.773 "is_configured": false, 00:14:33.773 "data_offset": 2048, 00:14:33.773 "data_size": 63488 00:14:33.773 }, 00:14:33.773 { 00:14:33.773 "name": "BaseBdev3", 00:14:33.773 "uuid": "3ad71a3d-8f83-5112-9914-f49bc2d1caa1", 00:14:33.773 "is_configured": true, 00:14:33.773 "data_offset": 2048, 00:14:33.773 "data_size": 63488 00:14:33.773 }, 00:14:33.773 { 00:14:33.773 "name": "BaseBdev4", 00:14:33.773 "uuid": "77c9ed8d-e229-5913-bb07-0eb90dd65344", 00:14:33.773 "is_configured": true, 00:14:33.773 "data_offset": 2048, 00:14:33.773 "data_size": 63488 00:14:33.773 
} 00:14:33.773 ] 00:14:33.773 }' 00:14:33.773 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.032 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:34.032 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.032 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:34.032 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:34.032 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:14:34.032 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:34.032 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:34.033 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:34.033 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:34.033 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:34.033 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:34.033 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.033 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.033 [2024-11-27 11:53:00.224823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:34.033 [2024-11-27 11:53:00.225073] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than 
existing raid bdev raid_bdev1 (6) 00:14:34.033 [2024-11-27 11:53:00.225139] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:34.033 request: 00:14:34.033 { 00:14:34.033 "base_bdev": "BaseBdev1", 00:14:34.033 "raid_bdev": "raid_bdev1", 00:14:34.033 "method": "bdev_raid_add_base_bdev", 00:14:34.033 "req_id": 1 00:14:34.033 } 00:14:34.033 Got JSON-RPC error response 00:14:34.033 response: 00:14:34.033 { 00:14:34.033 "code": -22, 00:14:34.033 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:34.033 } 00:14:34.033 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:34.033 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:34.033 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:34.033 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:34.033 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:34.033 11:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:34.979 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:34.979 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.979 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.979 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:34.979 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:34.979 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:34.979 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:14:34.979 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.979 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.979 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.979 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.979 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.979 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.979 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.979 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.979 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.979 "name": "raid_bdev1", 00:14:34.979 "uuid": "f15bec40-9c5d-4527-9a64-4040059270c7", 00:14:34.979 "strip_size_kb": 0, 00:14:34.979 "state": "online", 00:14:34.979 "raid_level": "raid1", 00:14:34.979 "superblock": true, 00:14:34.979 "num_base_bdevs": 4, 00:14:34.979 "num_base_bdevs_discovered": 2, 00:14:34.979 "num_base_bdevs_operational": 2, 00:14:34.979 "base_bdevs_list": [ 00:14:34.979 { 00:14:34.979 "name": null, 00:14:34.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.979 "is_configured": false, 00:14:34.979 "data_offset": 0, 00:14:34.979 "data_size": 63488 00:14:34.979 }, 00:14:34.979 { 00:14:34.979 "name": null, 00:14:34.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.979 "is_configured": false, 00:14:34.979 "data_offset": 2048, 00:14:34.979 "data_size": 63488 00:14:34.979 }, 00:14:34.979 { 00:14:34.979 "name": "BaseBdev3", 00:14:34.979 "uuid": "3ad71a3d-8f83-5112-9914-f49bc2d1caa1", 00:14:34.979 "is_configured": true, 00:14:34.979 
"data_offset": 2048, 00:14:34.979 "data_size": 63488 00:14:34.979 }, 00:14:34.979 { 00:14:34.979 "name": "BaseBdev4", 00:14:34.979 "uuid": "77c9ed8d-e229-5913-bb07-0eb90dd65344", 00:14:34.979 "is_configured": true, 00:14:34.979 "data_offset": 2048, 00:14:34.979 "data_size": 63488 00:14:34.979 } 00:14:34.979 ] 00:14:34.979 }' 00:14:34.979 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.979 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.548 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:35.548 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.548 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:35.548 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:35.548 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.548 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.548 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.548 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.548 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.548 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.548 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.548 "name": "raid_bdev1", 00:14:35.548 "uuid": "f15bec40-9c5d-4527-9a64-4040059270c7", 00:14:35.548 "strip_size_kb": 0, 00:14:35.548 "state": "online", 00:14:35.548 "raid_level": "raid1", 00:14:35.548 "superblock": true, 
00:14:35.548 "num_base_bdevs": 4, 00:14:35.548 "num_base_bdevs_discovered": 2, 00:14:35.548 "num_base_bdevs_operational": 2, 00:14:35.548 "base_bdevs_list": [ 00:14:35.548 { 00:14:35.548 "name": null, 00:14:35.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.548 "is_configured": false, 00:14:35.548 "data_offset": 0, 00:14:35.548 "data_size": 63488 00:14:35.548 }, 00:14:35.548 { 00:14:35.548 "name": null, 00:14:35.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.548 "is_configured": false, 00:14:35.548 "data_offset": 2048, 00:14:35.548 "data_size": 63488 00:14:35.548 }, 00:14:35.548 { 00:14:35.548 "name": "BaseBdev3", 00:14:35.548 "uuid": "3ad71a3d-8f83-5112-9914-f49bc2d1caa1", 00:14:35.548 "is_configured": true, 00:14:35.548 "data_offset": 2048, 00:14:35.548 "data_size": 63488 00:14:35.548 }, 00:14:35.548 { 00:14:35.548 "name": "BaseBdev4", 00:14:35.548 "uuid": "77c9ed8d-e229-5913-bb07-0eb90dd65344", 00:14:35.548 "is_configured": true, 00:14:35.548 "data_offset": 2048, 00:14:35.548 "data_size": 63488 00:14:35.548 } 00:14:35.548 ] 00:14:35.548 }' 00:14:35.548 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.548 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:35.548 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.548 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:35.548 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79173 00:14:35.548 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79173 ']' 00:14:35.548 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79173 00:14:35.548 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:35.548 11:53:01 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:35.548 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79173 00:14:35.548 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:35.548 killing process with pid 79173 00:14:35.548 Received shutdown signal, test time was about 18.581070 seconds 00:14:35.548 00:14:35.548 Latency(us) 00:14:35.548 [2024-11-27T11:53:01.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.548 [2024-11-27T11:53:01.933Z] =================================================================================================================== 00:14:35.548 [2024-11-27T11:53:01.933Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:35.548 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:35.548 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79173' 00:14:35.548 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79173 00:14:35.548 [2024-11-27 11:53:01.883548] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:35.548 [2024-11-27 11:53:01.883692] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:35.548 [2024-11-27 11:53:01.883771] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:35.548 [2024-11-27 11:53:01.883783] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:35.548 11:53:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79173 00:14:36.116 [2024-11-27 11:53:02.400181] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:37.506 11:53:03 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@786 -- # return 0 00:14:37.506 00:14:37.506 real 0m22.383s 00:14:37.506 user 0m29.380s 00:14:37.506 sys 0m2.695s 00:14:37.506 11:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:37.506 11:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.506 ************************************ 00:14:37.506 END TEST raid_rebuild_test_sb_io 00:14:37.506 ************************************ 00:14:37.765 11:53:03 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:37.765 11:53:03 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:37.765 11:53:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:37.765 11:53:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:37.765 11:53:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:37.765 ************************************ 00:14:37.765 START TEST raid5f_state_function_test 00:14:37.765 ************************************ 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev1 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' 
false = true ']' 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79908 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79908' 00:14:37.765 Process raid pid: 79908 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79908 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 79908 ']' 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:37.765 11:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.765 [2024-11-27 11:53:04.028781] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:14:37.765 [2024-11-27 11:53:04.029037] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.023 [2024-11-27 11:53:04.208546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.023 [2024-11-27 11:53:04.346597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.282 [2024-11-27 11:53:04.594858] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:38.282 [2024-11-27 11:53:04.594974] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:38.850 11:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:38.850 11:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:38.850 11:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:38.850 11:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.850 11:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.850 [2024-11-27 11:53:04.933264] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:38.850 [2024-11-27 11:53:04.933328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:38.850 [2024-11-27 11:53:04.933341] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:38.850 [2024-11-27 11:53:04.933353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:38.850 [2024-11-27 11:53:04.933361] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:38.850 [2024-11-27 11:53:04.933371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:38.850 11:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.850 11:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:38.850 11:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.850 11:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.850 11:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.850 11:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.850 11:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.850 11:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.850 11:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.850 11:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.850 11:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.850 11:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.850 11:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.850 11:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.850 11:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.850 11:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:38.850 11:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.850 "name": "Existed_Raid", 00:14:38.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.850 "strip_size_kb": 64, 00:14:38.850 "state": "configuring", 00:14:38.850 "raid_level": "raid5f", 00:14:38.850 "superblock": false, 00:14:38.850 "num_base_bdevs": 3, 00:14:38.850 "num_base_bdevs_discovered": 0, 00:14:38.850 "num_base_bdevs_operational": 3, 00:14:38.850 "base_bdevs_list": [ 00:14:38.850 { 00:14:38.850 "name": "BaseBdev1", 00:14:38.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.850 "is_configured": false, 00:14:38.850 "data_offset": 0, 00:14:38.850 "data_size": 0 00:14:38.850 }, 00:14:38.850 { 00:14:38.850 "name": "BaseBdev2", 00:14:38.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.850 "is_configured": false, 00:14:38.850 "data_offset": 0, 00:14:38.850 "data_size": 0 00:14:38.850 }, 00:14:38.850 { 00:14:38.850 "name": "BaseBdev3", 00:14:38.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.850 "is_configured": false, 00:14:38.850 "data_offset": 0, 00:14:38.850 "data_size": 0 00:14:38.850 } 00:14:38.850 ] 00:14:38.850 }' 00:14:38.850 11:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.850 11:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.109 11:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:39.109 11:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.109 11:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.109 [2024-11-27 11:53:05.412581] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:39.109 [2024-11-27 11:53:05.412687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:14:39.109 11:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.109 11:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:39.109 11:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.109 11:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.109 [2024-11-27 11:53:05.424564] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:39.109 [2024-11-27 11:53:05.424668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:39.109 [2024-11-27 11:53:05.424702] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:39.109 [2024-11-27 11:53:05.424731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:39.109 [2024-11-27 11:53:05.424753] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:39.109 [2024-11-27 11:53:05.424779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:39.109 11:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.109 11:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:39.109 11:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.109 11:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.109 [2024-11-27 11:53:05.480324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:39.109 BaseBdev1 00:14:39.109 11:53:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.109 11:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:39.109 11:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:39.109 11:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:39.109 11:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:39.109 11:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:39.109 11:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:39.109 11:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:39.109 11:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.109 11:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.370 11:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.370 11:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:39.370 11:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.370 11:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.370 [ 00:14:39.370 { 00:14:39.370 "name": "BaseBdev1", 00:14:39.370 "aliases": [ 00:14:39.370 "f55f3569-79df-4ce7-b35a-baa6402c152a" 00:14:39.370 ], 00:14:39.370 "product_name": "Malloc disk", 00:14:39.370 "block_size": 512, 00:14:39.370 "num_blocks": 65536, 00:14:39.370 "uuid": "f55f3569-79df-4ce7-b35a-baa6402c152a", 00:14:39.370 "assigned_rate_limits": { 00:14:39.370 "rw_ios_per_sec": 0, 00:14:39.370 
"rw_mbytes_per_sec": 0, 00:14:39.370 "r_mbytes_per_sec": 0, 00:14:39.370 "w_mbytes_per_sec": 0 00:14:39.370 }, 00:14:39.370 "claimed": true, 00:14:39.370 "claim_type": "exclusive_write", 00:14:39.370 "zoned": false, 00:14:39.370 "supported_io_types": { 00:14:39.370 "read": true, 00:14:39.370 "write": true, 00:14:39.370 "unmap": true, 00:14:39.370 "flush": true, 00:14:39.370 "reset": true, 00:14:39.370 "nvme_admin": false, 00:14:39.370 "nvme_io": false, 00:14:39.370 "nvme_io_md": false, 00:14:39.370 "write_zeroes": true, 00:14:39.370 "zcopy": true, 00:14:39.370 "get_zone_info": false, 00:14:39.370 "zone_management": false, 00:14:39.370 "zone_append": false, 00:14:39.370 "compare": false, 00:14:39.370 "compare_and_write": false, 00:14:39.370 "abort": true, 00:14:39.370 "seek_hole": false, 00:14:39.370 "seek_data": false, 00:14:39.370 "copy": true, 00:14:39.370 "nvme_iov_md": false 00:14:39.370 }, 00:14:39.370 "memory_domains": [ 00:14:39.370 { 00:14:39.370 "dma_device_id": "system", 00:14:39.370 "dma_device_type": 1 00:14:39.370 }, 00:14:39.370 { 00:14:39.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.370 "dma_device_type": 2 00:14:39.370 } 00:14:39.370 ], 00:14:39.370 "driver_specific": {} 00:14:39.370 } 00:14:39.370 ] 00:14:39.370 11:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.370 11:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:39.370 11:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:39.370 11:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.370 11:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:39.370 11:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.370 11:53:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.370 11:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.370 11:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.370 11:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.370 11:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.370 11:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.370 11:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.370 11:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.370 11:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.370 11:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.370 11:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.370 11:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.370 "name": "Existed_Raid", 00:14:39.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.370 "strip_size_kb": 64, 00:14:39.370 "state": "configuring", 00:14:39.370 "raid_level": "raid5f", 00:14:39.370 "superblock": false, 00:14:39.370 "num_base_bdevs": 3, 00:14:39.370 "num_base_bdevs_discovered": 1, 00:14:39.370 "num_base_bdevs_operational": 3, 00:14:39.370 "base_bdevs_list": [ 00:14:39.370 { 00:14:39.370 "name": "BaseBdev1", 00:14:39.370 "uuid": "f55f3569-79df-4ce7-b35a-baa6402c152a", 00:14:39.370 "is_configured": true, 00:14:39.370 "data_offset": 0, 00:14:39.370 "data_size": 65536 00:14:39.370 }, 00:14:39.370 { 00:14:39.370 "name": 
"BaseBdev2", 00:14:39.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.370 "is_configured": false, 00:14:39.370 "data_offset": 0, 00:14:39.370 "data_size": 0 00:14:39.370 }, 00:14:39.370 { 00:14:39.370 "name": "BaseBdev3", 00:14:39.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.370 "is_configured": false, 00:14:39.370 "data_offset": 0, 00:14:39.370 "data_size": 0 00:14:39.370 } 00:14:39.370 ] 00:14:39.370 }' 00:14:39.370 11:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.370 11:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.938 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:39.938 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.938 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.938 [2024-11-27 11:53:06.031670] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:39.938 [2024-11-27 11:53:06.031811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:39.938 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.938 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:39.938 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.938 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.938 [2024-11-27 11:53:06.039679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:39.938 [2024-11-27 11:53:06.041669] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:39.938 [2024-11-27 11:53:06.041751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:39.938 [2024-11-27 11:53:06.041782] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:39.938 [2024-11-27 11:53:06.041805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:39.938 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.938 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:39.938 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:39.938 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:39.938 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.938 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:39.938 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.938 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.938 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.938 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.938 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.938 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.938 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.938 11:53:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.938 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.938 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.938 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.938 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.938 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.938 "name": "Existed_Raid", 00:14:39.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.938 "strip_size_kb": 64, 00:14:39.938 "state": "configuring", 00:14:39.938 "raid_level": "raid5f", 00:14:39.938 "superblock": false, 00:14:39.938 "num_base_bdevs": 3, 00:14:39.938 "num_base_bdevs_discovered": 1, 00:14:39.938 "num_base_bdevs_operational": 3, 00:14:39.938 "base_bdevs_list": [ 00:14:39.938 { 00:14:39.938 "name": "BaseBdev1", 00:14:39.938 "uuid": "f55f3569-79df-4ce7-b35a-baa6402c152a", 00:14:39.938 "is_configured": true, 00:14:39.938 "data_offset": 0, 00:14:39.938 "data_size": 65536 00:14:39.938 }, 00:14:39.938 { 00:14:39.938 "name": "BaseBdev2", 00:14:39.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.938 "is_configured": false, 00:14:39.938 "data_offset": 0, 00:14:39.938 "data_size": 0 00:14:39.938 }, 00:14:39.938 { 00:14:39.938 "name": "BaseBdev3", 00:14:39.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.938 "is_configured": false, 00:14:39.938 "data_offset": 0, 00:14:39.938 "data_size": 0 00:14:39.938 } 00:14:39.938 ] 00:14:39.938 }' 00:14:39.938 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.938 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.197 11:53:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:40.197 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.197 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.197 [2024-11-27 11:53:06.509037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:40.197 BaseBdev2 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:40.198 [ 00:14:40.198 { 00:14:40.198 "name": "BaseBdev2", 00:14:40.198 "aliases": [ 00:14:40.198 "96648e02-9084-4ea3-892b-8065585d4fcb" 00:14:40.198 ], 00:14:40.198 "product_name": "Malloc disk", 00:14:40.198 "block_size": 512, 00:14:40.198 "num_blocks": 65536, 00:14:40.198 "uuid": "96648e02-9084-4ea3-892b-8065585d4fcb", 00:14:40.198 "assigned_rate_limits": { 00:14:40.198 "rw_ios_per_sec": 0, 00:14:40.198 "rw_mbytes_per_sec": 0, 00:14:40.198 "r_mbytes_per_sec": 0, 00:14:40.198 "w_mbytes_per_sec": 0 00:14:40.198 }, 00:14:40.198 "claimed": true, 00:14:40.198 "claim_type": "exclusive_write", 00:14:40.198 "zoned": false, 00:14:40.198 "supported_io_types": { 00:14:40.198 "read": true, 00:14:40.198 "write": true, 00:14:40.198 "unmap": true, 00:14:40.198 "flush": true, 00:14:40.198 "reset": true, 00:14:40.198 "nvme_admin": false, 00:14:40.198 "nvme_io": false, 00:14:40.198 "nvme_io_md": false, 00:14:40.198 "write_zeroes": true, 00:14:40.198 "zcopy": true, 00:14:40.198 "get_zone_info": false, 00:14:40.198 "zone_management": false, 00:14:40.198 "zone_append": false, 00:14:40.198 "compare": false, 00:14:40.198 "compare_and_write": false, 00:14:40.198 "abort": true, 00:14:40.198 "seek_hole": false, 00:14:40.198 "seek_data": false, 00:14:40.198 "copy": true, 00:14:40.198 "nvme_iov_md": false 00:14:40.198 }, 00:14:40.198 "memory_domains": [ 00:14:40.198 { 00:14:40.198 "dma_device_id": "system", 00:14:40.198 "dma_device_type": 1 00:14:40.198 }, 00:14:40.198 { 00:14:40.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.198 "dma_device_type": 2 00:14:40.198 } 00:14:40.198 ], 00:14:40.198 "driver_specific": {} 00:14:40.198 } 00:14:40.198 ] 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.198 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.465 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:14:40.465 "name": "Existed_Raid", 00:14:40.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.465 "strip_size_kb": 64, 00:14:40.465 "state": "configuring", 00:14:40.465 "raid_level": "raid5f", 00:14:40.465 "superblock": false, 00:14:40.465 "num_base_bdevs": 3, 00:14:40.465 "num_base_bdevs_discovered": 2, 00:14:40.465 "num_base_bdevs_operational": 3, 00:14:40.465 "base_bdevs_list": [ 00:14:40.465 { 00:14:40.465 "name": "BaseBdev1", 00:14:40.465 "uuid": "f55f3569-79df-4ce7-b35a-baa6402c152a", 00:14:40.465 "is_configured": true, 00:14:40.465 "data_offset": 0, 00:14:40.465 "data_size": 65536 00:14:40.465 }, 00:14:40.465 { 00:14:40.465 "name": "BaseBdev2", 00:14:40.465 "uuid": "96648e02-9084-4ea3-892b-8065585d4fcb", 00:14:40.465 "is_configured": true, 00:14:40.465 "data_offset": 0, 00:14:40.465 "data_size": 65536 00:14:40.465 }, 00:14:40.465 { 00:14:40.465 "name": "BaseBdev3", 00:14:40.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.465 "is_configured": false, 00:14:40.465 "data_offset": 0, 00:14:40.465 "data_size": 0 00:14:40.465 } 00:14:40.465 ] 00:14:40.465 }' 00:14:40.465 11:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.465 11:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.739 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:40.739 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.739 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.999 [2024-11-27 11:53:07.137396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:40.999 [2024-11-27 11:53:07.137570] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:40.999 [2024-11-27 11:53:07.137612] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:40.999 [2024-11-27 11:53:07.137978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:40.999 [2024-11-27 11:53:07.145057] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:40.999 [2024-11-27 11:53:07.145124] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:40.999 [2024-11-27 11:53:07.145481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.999 BaseBdev3 00:14:40.999 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.999 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:40.999 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:40.999 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:40.999 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:40.999 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:40.999 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:40.999 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:40.999 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.999 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.999 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.999 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:14:40.999 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.999 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.999 [ 00:14:40.999 { 00:14:40.999 "name": "BaseBdev3", 00:14:40.999 "aliases": [ 00:14:40.999 "273b837e-89b3-4cec-b7f8-e63f9e286a98" 00:14:40.999 ], 00:14:40.999 "product_name": "Malloc disk", 00:14:40.999 "block_size": 512, 00:14:40.999 "num_blocks": 65536, 00:14:40.999 "uuid": "273b837e-89b3-4cec-b7f8-e63f9e286a98", 00:14:40.999 "assigned_rate_limits": { 00:14:40.999 "rw_ios_per_sec": 0, 00:14:40.999 "rw_mbytes_per_sec": 0, 00:14:40.999 "r_mbytes_per_sec": 0, 00:14:40.999 "w_mbytes_per_sec": 0 00:14:40.999 }, 00:14:40.999 "claimed": true, 00:14:40.999 "claim_type": "exclusive_write", 00:14:40.999 "zoned": false, 00:14:40.999 "supported_io_types": { 00:14:40.999 "read": true, 00:14:40.999 "write": true, 00:14:40.999 "unmap": true, 00:14:40.999 "flush": true, 00:14:40.999 "reset": true, 00:14:40.999 "nvme_admin": false, 00:14:40.999 "nvme_io": false, 00:14:40.999 "nvme_io_md": false, 00:14:40.999 "write_zeroes": true, 00:14:40.999 "zcopy": true, 00:14:40.999 "get_zone_info": false, 00:14:40.999 "zone_management": false, 00:14:40.999 "zone_append": false, 00:14:40.999 "compare": false, 00:14:40.999 "compare_and_write": false, 00:14:40.999 "abort": true, 00:14:40.999 "seek_hole": false, 00:14:40.999 "seek_data": false, 00:14:40.999 "copy": true, 00:14:40.999 "nvme_iov_md": false 00:14:40.999 }, 00:14:40.999 "memory_domains": [ 00:14:40.999 { 00:14:40.999 "dma_device_id": "system", 00:14:40.999 "dma_device_type": 1 00:14:40.999 }, 00:14:40.999 { 00:14:40.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.999 "dma_device_type": 2 00:14:40.999 } 00:14:40.999 ], 00:14:40.999 "driver_specific": {} 00:14:40.999 } 00:14:40.999 ] 00:14:40.999 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:14:40.999 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:40.999 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:40.999 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:40.999 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:41.000 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.000 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.000 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.000 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.000 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.000 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.000 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.000 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.000 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.000 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.000 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.000 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.000 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.000 11:53:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.000 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.000 "name": "Existed_Raid", 00:14:41.000 "uuid": "af4a8b14-1754-415f-bd45-a10e9d4159e1", 00:14:41.000 "strip_size_kb": 64, 00:14:41.000 "state": "online", 00:14:41.000 "raid_level": "raid5f", 00:14:41.000 "superblock": false, 00:14:41.000 "num_base_bdevs": 3, 00:14:41.000 "num_base_bdevs_discovered": 3, 00:14:41.000 "num_base_bdevs_operational": 3, 00:14:41.000 "base_bdevs_list": [ 00:14:41.000 { 00:14:41.000 "name": "BaseBdev1", 00:14:41.000 "uuid": "f55f3569-79df-4ce7-b35a-baa6402c152a", 00:14:41.000 "is_configured": true, 00:14:41.000 "data_offset": 0, 00:14:41.000 "data_size": 65536 00:14:41.000 }, 00:14:41.000 { 00:14:41.000 "name": "BaseBdev2", 00:14:41.000 "uuid": "96648e02-9084-4ea3-892b-8065585d4fcb", 00:14:41.000 "is_configured": true, 00:14:41.000 "data_offset": 0, 00:14:41.000 "data_size": 65536 00:14:41.000 }, 00:14:41.000 { 00:14:41.000 "name": "BaseBdev3", 00:14:41.000 "uuid": "273b837e-89b3-4cec-b7f8-e63f9e286a98", 00:14:41.000 "is_configured": true, 00:14:41.000 "data_offset": 0, 00:14:41.000 "data_size": 65536 00:14:41.000 } 00:14:41.000 ] 00:14:41.000 }' 00:14:41.000 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.000 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.569 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:41.569 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:41.569 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:41.569 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:41.569 11:53:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:41.569 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:41.569 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:41.569 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:41.569 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.569 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.569 [2024-11-27 11:53:07.688273] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:41.569 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.569 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:41.569 "name": "Existed_Raid", 00:14:41.569 "aliases": [ 00:14:41.569 "af4a8b14-1754-415f-bd45-a10e9d4159e1" 00:14:41.569 ], 00:14:41.569 "product_name": "Raid Volume", 00:14:41.569 "block_size": 512, 00:14:41.569 "num_blocks": 131072, 00:14:41.569 "uuid": "af4a8b14-1754-415f-bd45-a10e9d4159e1", 00:14:41.569 "assigned_rate_limits": { 00:14:41.569 "rw_ios_per_sec": 0, 00:14:41.569 "rw_mbytes_per_sec": 0, 00:14:41.569 "r_mbytes_per_sec": 0, 00:14:41.569 "w_mbytes_per_sec": 0 00:14:41.569 }, 00:14:41.569 "claimed": false, 00:14:41.569 "zoned": false, 00:14:41.569 "supported_io_types": { 00:14:41.569 "read": true, 00:14:41.569 "write": true, 00:14:41.569 "unmap": false, 00:14:41.569 "flush": false, 00:14:41.569 "reset": true, 00:14:41.569 "nvme_admin": false, 00:14:41.569 "nvme_io": false, 00:14:41.569 "nvme_io_md": false, 00:14:41.569 "write_zeroes": true, 00:14:41.569 "zcopy": false, 00:14:41.569 "get_zone_info": false, 00:14:41.569 "zone_management": false, 00:14:41.569 "zone_append": false, 
00:14:41.569 "compare": false, 00:14:41.569 "compare_and_write": false, 00:14:41.569 "abort": false, 00:14:41.569 "seek_hole": false, 00:14:41.569 "seek_data": false, 00:14:41.569 "copy": false, 00:14:41.569 "nvme_iov_md": false 00:14:41.569 }, 00:14:41.569 "driver_specific": { 00:14:41.569 "raid": { 00:14:41.569 "uuid": "af4a8b14-1754-415f-bd45-a10e9d4159e1", 00:14:41.569 "strip_size_kb": 64, 00:14:41.569 "state": "online", 00:14:41.569 "raid_level": "raid5f", 00:14:41.569 "superblock": false, 00:14:41.569 "num_base_bdevs": 3, 00:14:41.569 "num_base_bdevs_discovered": 3, 00:14:41.569 "num_base_bdevs_operational": 3, 00:14:41.569 "base_bdevs_list": [ 00:14:41.569 { 00:14:41.569 "name": "BaseBdev1", 00:14:41.569 "uuid": "f55f3569-79df-4ce7-b35a-baa6402c152a", 00:14:41.569 "is_configured": true, 00:14:41.569 "data_offset": 0, 00:14:41.569 "data_size": 65536 00:14:41.569 }, 00:14:41.569 { 00:14:41.569 "name": "BaseBdev2", 00:14:41.569 "uuid": "96648e02-9084-4ea3-892b-8065585d4fcb", 00:14:41.569 "is_configured": true, 00:14:41.569 "data_offset": 0, 00:14:41.569 "data_size": 65536 00:14:41.569 }, 00:14:41.569 { 00:14:41.569 "name": "BaseBdev3", 00:14:41.569 "uuid": "273b837e-89b3-4cec-b7f8-e63f9e286a98", 00:14:41.569 "is_configured": true, 00:14:41.569 "data_offset": 0, 00:14:41.569 "data_size": 65536 00:14:41.569 } 00:14:41.569 ] 00:14:41.569 } 00:14:41.569 } 00:14:41.569 }' 00:14:41.569 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:41.569 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:41.569 BaseBdev2 00:14:41.569 BaseBdev3' 00:14:41.569 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.569 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:14:41.569 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.569 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:41.569 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.569 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.569 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.569 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.569 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.569 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.569 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.570 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:41.570 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.570 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.570 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.570 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.570 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.570 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.570 11:53:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.570 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:41.570 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.570 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.570 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.829 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.830 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.830 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.830 11:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:41.830 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.830 11:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.830 [2024-11-27 11:53:07.995623] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:41.830 11:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.830 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:41.830 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:41.830 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:41.830 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:41.830 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:41.830 
11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:41.830 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.830 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.830 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.830 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.830 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:41.830 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.830 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.830 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.830 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.830 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.830 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.830 11:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.830 11:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.830 11:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.830 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.830 "name": "Existed_Raid", 00:14:41.830 "uuid": "af4a8b14-1754-415f-bd45-a10e9d4159e1", 00:14:41.830 "strip_size_kb": 64, 00:14:41.830 "state": 
"online", 00:14:41.830 "raid_level": "raid5f", 00:14:41.830 "superblock": false, 00:14:41.830 "num_base_bdevs": 3, 00:14:41.830 "num_base_bdevs_discovered": 2, 00:14:41.830 "num_base_bdevs_operational": 2, 00:14:41.830 "base_bdevs_list": [ 00:14:41.830 { 00:14:41.830 "name": null, 00:14:41.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.830 "is_configured": false, 00:14:41.830 "data_offset": 0, 00:14:41.830 "data_size": 65536 00:14:41.830 }, 00:14:41.830 { 00:14:41.830 "name": "BaseBdev2", 00:14:41.830 "uuid": "96648e02-9084-4ea3-892b-8065585d4fcb", 00:14:41.830 "is_configured": true, 00:14:41.830 "data_offset": 0, 00:14:41.830 "data_size": 65536 00:14:41.830 }, 00:14:41.830 { 00:14:41.830 "name": "BaseBdev3", 00:14:41.830 "uuid": "273b837e-89b3-4cec-b7f8-e63f9e286a98", 00:14:41.830 "is_configured": true, 00:14:41.830 "data_offset": 0, 00:14:41.830 "data_size": 65536 00:14:41.830 } 00:14:41.830 ] 00:14:41.830 }' 00:14:41.830 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.830 11:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.398 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:42.398 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:42.398 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.398 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:42.398 11:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.398 11:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.398 11:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.398 11:53:08 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:42.398 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:42.398 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:42.398 11:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.398 11:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.398 [2024-11-27 11:53:08.674821] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:42.398 [2024-11-27 11:53:08.674949] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:42.657 [2024-11-27 11:53:08.790767] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:42.657 11:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.657 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:42.657 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:42.657 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.657 11:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.657 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:42.657 11:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.657 11:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.657 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:42.657 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:42.657 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:42.657 11:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.657 11:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.657 [2024-11-27 11:53:08.846755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:42.657 [2024-11-27 11:53:08.846813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:42.657 11:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.657 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:42.657 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:42.657 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.657 11:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:42.657 11:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.657 11:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.657 11:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.657 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:42.657 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:42.657 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:42.657 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:42.657 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:14:42.657 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:42.657 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.657 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.927 BaseBdev2 00:14:42.927 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.928 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:42.928 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:42.928 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:42.928 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:42.928 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:42.928 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:42.928 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:42.928 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.928 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.928 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.928 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:42.928 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.928 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:42.928 [ 00:14:42.928 { 00:14:42.928 "name": "BaseBdev2", 00:14:42.929 "aliases": [ 00:14:42.929 "dfc0d42b-4875-40c3-8e70-502eb3171194" 00:14:42.929 ], 00:14:42.929 "product_name": "Malloc disk", 00:14:42.929 "block_size": 512, 00:14:42.929 "num_blocks": 65536, 00:14:42.929 "uuid": "dfc0d42b-4875-40c3-8e70-502eb3171194", 00:14:42.929 "assigned_rate_limits": { 00:14:42.929 "rw_ios_per_sec": 0, 00:14:42.929 "rw_mbytes_per_sec": 0, 00:14:42.929 "r_mbytes_per_sec": 0, 00:14:42.929 "w_mbytes_per_sec": 0 00:14:42.929 }, 00:14:42.929 "claimed": false, 00:14:42.929 "zoned": false, 00:14:42.929 "supported_io_types": { 00:14:42.929 "read": true, 00:14:42.929 "write": true, 00:14:42.929 "unmap": true, 00:14:42.929 "flush": true, 00:14:42.929 "reset": true, 00:14:42.929 "nvme_admin": false, 00:14:42.929 "nvme_io": false, 00:14:42.929 "nvme_io_md": false, 00:14:42.929 "write_zeroes": true, 00:14:42.929 "zcopy": true, 00:14:42.929 "get_zone_info": false, 00:14:42.929 "zone_management": false, 00:14:42.929 "zone_append": false, 00:14:42.929 "compare": false, 00:14:42.929 "compare_and_write": false, 00:14:42.929 "abort": true, 00:14:42.929 "seek_hole": false, 00:14:42.929 "seek_data": false, 00:14:42.929 "copy": true, 00:14:42.929 "nvme_iov_md": false 00:14:42.929 }, 00:14:42.929 "memory_domains": [ 00:14:42.929 { 00:14:42.929 "dma_device_id": "system", 00:14:42.929 "dma_device_type": 1 00:14:42.929 }, 00:14:42.929 { 00:14:42.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.929 "dma_device_type": 2 00:14:42.929 } 00:14:42.929 ], 00:14:42.929 "driver_specific": {} 00:14:42.929 } 00:14:42.929 ] 00:14:42.929 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.929 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:42.930 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:42.930 11:53:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:42.930 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:42.930 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.930 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.930 BaseBdev3 00:14:42.930 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.930 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:42.930 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:42.930 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:42.930 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:42.930 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:42.930 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:42.930 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:42.930 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.930 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.934 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.935 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:42.935 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.935 11:53:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:42.935 [ 00:14:42.935 { 00:14:42.935 "name": "BaseBdev3", 00:14:42.935 "aliases": [ 00:14:42.935 "8b416a30-2a81-4070-8525-e776969f6157" 00:14:42.935 ], 00:14:42.935 "product_name": "Malloc disk", 00:14:42.935 "block_size": 512, 00:14:42.935 "num_blocks": 65536, 00:14:42.935 "uuid": "8b416a30-2a81-4070-8525-e776969f6157", 00:14:42.935 "assigned_rate_limits": { 00:14:42.935 "rw_ios_per_sec": 0, 00:14:42.935 "rw_mbytes_per_sec": 0, 00:14:42.935 "r_mbytes_per_sec": 0, 00:14:42.935 "w_mbytes_per_sec": 0 00:14:42.935 }, 00:14:42.935 "claimed": false, 00:14:42.935 "zoned": false, 00:14:42.935 "supported_io_types": { 00:14:42.935 "read": true, 00:14:42.935 "write": true, 00:14:42.935 "unmap": true, 00:14:42.935 "flush": true, 00:14:42.935 "reset": true, 00:14:42.935 "nvme_admin": false, 00:14:42.935 "nvme_io": false, 00:14:42.935 "nvme_io_md": false, 00:14:42.935 "write_zeroes": true, 00:14:42.935 "zcopy": true, 00:14:42.935 "get_zone_info": false, 00:14:42.935 "zone_management": false, 00:14:42.935 "zone_append": false, 00:14:42.935 "compare": false, 00:14:42.935 "compare_and_write": false, 00:14:42.935 "abort": true, 00:14:42.935 "seek_hole": false, 00:14:42.935 "seek_data": false, 00:14:42.935 "copy": true, 00:14:42.935 "nvme_iov_md": false 00:14:42.935 }, 00:14:42.935 "memory_domains": [ 00:14:42.935 { 00:14:42.935 "dma_device_id": "system", 00:14:42.935 "dma_device_type": 1 00:14:42.935 }, 00:14:42.935 { 00:14:42.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.936 "dma_device_type": 2 00:14:42.936 } 00:14:42.936 ], 00:14:42.936 "driver_specific": {} 00:14:42.936 } 00:14:42.936 ] 00:14:42.936 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.936 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:42.936 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:42.936 11:53:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:42.936 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:42.936 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.936 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.936 [2024-11-27 11:53:09.205380] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:42.936 [2024-11-27 11:53:09.205499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:42.936 [2024-11-27 11:53:09.205565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:42.936 [2024-11-27 11:53:09.207788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:42.936 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.936 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:42.936 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.936 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.936 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.936 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.936 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.936 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.936 11:53:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.936 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.936 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.936 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.936 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.937 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.937 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.937 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.937 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.937 "name": "Existed_Raid", 00:14:42.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.937 "strip_size_kb": 64, 00:14:42.937 "state": "configuring", 00:14:42.937 "raid_level": "raid5f", 00:14:42.937 "superblock": false, 00:14:42.937 "num_base_bdevs": 3, 00:14:42.937 "num_base_bdevs_discovered": 2, 00:14:42.937 "num_base_bdevs_operational": 3, 00:14:42.937 "base_bdevs_list": [ 00:14:42.937 { 00:14:42.937 "name": "BaseBdev1", 00:14:42.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.937 "is_configured": false, 00:14:42.937 "data_offset": 0, 00:14:42.937 "data_size": 0 00:14:42.937 }, 00:14:42.937 { 00:14:42.937 "name": "BaseBdev2", 00:14:42.937 "uuid": "dfc0d42b-4875-40c3-8e70-502eb3171194", 00:14:42.937 "is_configured": true, 00:14:42.937 "data_offset": 0, 00:14:42.937 "data_size": 65536 00:14:42.937 }, 00:14:42.937 { 00:14:42.937 "name": "BaseBdev3", 00:14:42.937 "uuid": "8b416a30-2a81-4070-8525-e776969f6157", 00:14:42.937 "is_configured": true, 
00:14:42.937 "data_offset": 0, 00:14:42.937 "data_size": 65536 00:14:42.937 } 00:14:42.937 ] 00:14:42.937 }' 00:14:42.937 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.937 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.512 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:43.512 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.512 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.512 [2024-11-27 11:53:09.664627] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:43.512 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.512 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:43.512 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.512 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.512 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.512 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.512 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:43.512 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.512 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.512 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.512 11:53:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.512 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.512 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.512 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.512 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.512 11:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.512 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.512 "name": "Existed_Raid", 00:14:43.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.512 "strip_size_kb": 64, 00:14:43.512 "state": "configuring", 00:14:43.512 "raid_level": "raid5f", 00:14:43.512 "superblock": false, 00:14:43.512 "num_base_bdevs": 3, 00:14:43.512 "num_base_bdevs_discovered": 1, 00:14:43.512 "num_base_bdevs_operational": 3, 00:14:43.512 "base_bdevs_list": [ 00:14:43.512 { 00:14:43.512 "name": "BaseBdev1", 00:14:43.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.512 "is_configured": false, 00:14:43.512 "data_offset": 0, 00:14:43.512 "data_size": 0 00:14:43.512 }, 00:14:43.512 { 00:14:43.512 "name": null, 00:14:43.512 "uuid": "dfc0d42b-4875-40c3-8e70-502eb3171194", 00:14:43.512 "is_configured": false, 00:14:43.512 "data_offset": 0, 00:14:43.512 "data_size": 65536 00:14:43.512 }, 00:14:43.512 { 00:14:43.512 "name": "BaseBdev3", 00:14:43.512 "uuid": "8b416a30-2a81-4070-8525-e776969f6157", 00:14:43.512 "is_configured": true, 00:14:43.512 "data_offset": 0, 00:14:43.512 "data_size": 65536 00:14:43.512 } 00:14:43.512 ] 00:14:43.512 }' 00:14:43.512 11:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.512 11:53:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.771 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.771 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:43.771 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.771 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.771 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.031 [2024-11-27 11:53:10.210158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:44.031 BaseBdev1 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:44.031 11:53:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.031 [ 00:14:44.031 { 00:14:44.031 "name": "BaseBdev1", 00:14:44.031 "aliases": [ 00:14:44.031 "a1bb3401-1d4c-4786-993d-c00d8d926b97" 00:14:44.031 ], 00:14:44.031 "product_name": "Malloc disk", 00:14:44.031 "block_size": 512, 00:14:44.031 "num_blocks": 65536, 00:14:44.031 "uuid": "a1bb3401-1d4c-4786-993d-c00d8d926b97", 00:14:44.031 "assigned_rate_limits": { 00:14:44.031 "rw_ios_per_sec": 0, 00:14:44.031 "rw_mbytes_per_sec": 0, 00:14:44.031 "r_mbytes_per_sec": 0, 00:14:44.031 "w_mbytes_per_sec": 0 00:14:44.031 }, 00:14:44.031 "claimed": true, 00:14:44.031 "claim_type": "exclusive_write", 00:14:44.031 "zoned": false, 00:14:44.031 "supported_io_types": { 00:14:44.031 "read": true, 00:14:44.031 "write": true, 00:14:44.031 "unmap": true, 00:14:44.031 "flush": true, 00:14:44.031 "reset": true, 00:14:44.031 "nvme_admin": false, 00:14:44.031 "nvme_io": false, 00:14:44.031 "nvme_io_md": false, 00:14:44.031 "write_zeroes": true, 00:14:44.031 "zcopy": true, 00:14:44.031 "get_zone_info": false, 00:14:44.031 "zone_management": false, 00:14:44.031 "zone_append": false, 00:14:44.031 
"compare": false, 00:14:44.031 "compare_and_write": false, 00:14:44.031 "abort": true, 00:14:44.031 "seek_hole": false, 00:14:44.031 "seek_data": false, 00:14:44.031 "copy": true, 00:14:44.031 "nvme_iov_md": false 00:14:44.031 }, 00:14:44.031 "memory_domains": [ 00:14:44.031 { 00:14:44.031 "dma_device_id": "system", 00:14:44.031 "dma_device_type": 1 00:14:44.031 }, 00:14:44.031 { 00:14:44.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.031 "dma_device_type": 2 00:14:44.031 } 00:14:44.031 ], 00:14:44.031 "driver_specific": {} 00:14:44.031 } 00:14:44.031 ] 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.031 11:53:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.031 "name": "Existed_Raid", 00:14:44.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.031 "strip_size_kb": 64, 00:14:44.031 "state": "configuring", 00:14:44.031 "raid_level": "raid5f", 00:14:44.031 "superblock": false, 00:14:44.031 "num_base_bdevs": 3, 00:14:44.031 "num_base_bdevs_discovered": 2, 00:14:44.031 "num_base_bdevs_operational": 3, 00:14:44.031 "base_bdevs_list": [ 00:14:44.031 { 00:14:44.031 "name": "BaseBdev1", 00:14:44.031 "uuid": "a1bb3401-1d4c-4786-993d-c00d8d926b97", 00:14:44.031 "is_configured": true, 00:14:44.031 "data_offset": 0, 00:14:44.031 "data_size": 65536 00:14:44.031 }, 00:14:44.031 { 00:14:44.031 "name": null, 00:14:44.031 "uuid": "dfc0d42b-4875-40c3-8e70-502eb3171194", 00:14:44.031 "is_configured": false, 00:14:44.031 "data_offset": 0, 00:14:44.031 "data_size": 65536 00:14:44.031 }, 00:14:44.031 { 00:14:44.031 "name": "BaseBdev3", 00:14:44.031 "uuid": "8b416a30-2a81-4070-8525-e776969f6157", 00:14:44.031 "is_configured": true, 00:14:44.031 "data_offset": 0, 00:14:44.031 "data_size": 65536 00:14:44.031 } 00:14:44.031 ] 00:14:44.031 }' 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.031 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.290 11:53:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.290 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.290 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:44.290 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.550 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.550 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:44.550 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:44.550 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.550 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.550 [2024-11-27 11:53:10.717497] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:44.550 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.550 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:44.550 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.550 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.550 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.550 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.550 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.550 11:53:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.550 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.550 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.550 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.550 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.550 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.550 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.550 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.550 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.550 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.550 "name": "Existed_Raid", 00:14:44.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.550 "strip_size_kb": 64, 00:14:44.550 "state": "configuring", 00:14:44.550 "raid_level": "raid5f", 00:14:44.550 "superblock": false, 00:14:44.550 "num_base_bdevs": 3, 00:14:44.550 "num_base_bdevs_discovered": 1, 00:14:44.550 "num_base_bdevs_operational": 3, 00:14:44.550 "base_bdevs_list": [ 00:14:44.550 { 00:14:44.550 "name": "BaseBdev1", 00:14:44.550 "uuid": "a1bb3401-1d4c-4786-993d-c00d8d926b97", 00:14:44.550 "is_configured": true, 00:14:44.550 "data_offset": 0, 00:14:44.550 "data_size": 65536 00:14:44.550 }, 00:14:44.550 { 00:14:44.550 "name": null, 00:14:44.550 "uuid": "dfc0d42b-4875-40c3-8e70-502eb3171194", 00:14:44.550 "is_configured": false, 00:14:44.550 "data_offset": 0, 00:14:44.550 "data_size": 65536 00:14:44.550 }, 00:14:44.550 { 00:14:44.550 "name": null, 
00:14:44.550 "uuid": "8b416a30-2a81-4070-8525-e776969f6157", 00:14:44.550 "is_configured": false, 00:14:44.550 "data_offset": 0, 00:14:44.550 "data_size": 65536 00:14:44.550 } 00:14:44.550 ] 00:14:44.550 }' 00:14:44.550 11:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.550 11:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.118 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.118 11:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.118 11:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.118 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:45.118 11:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.118 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:45.118 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:45.118 11:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.118 11:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.118 [2024-11-27 11:53:11.268613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:45.118 11:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.118 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:45.118 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.118 11:53:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.118 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.118 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.118 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.118 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.118 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.118 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.118 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.118 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.118 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.118 11:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.118 11:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.118 11:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.118 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.118 "name": "Existed_Raid", 00:14:45.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.118 "strip_size_kb": 64, 00:14:45.118 "state": "configuring", 00:14:45.118 "raid_level": "raid5f", 00:14:45.118 "superblock": false, 00:14:45.118 "num_base_bdevs": 3, 00:14:45.118 "num_base_bdevs_discovered": 2, 00:14:45.118 "num_base_bdevs_operational": 3, 00:14:45.118 "base_bdevs_list": [ 00:14:45.118 { 
00:14:45.118 "name": "BaseBdev1", 00:14:45.118 "uuid": "a1bb3401-1d4c-4786-993d-c00d8d926b97", 00:14:45.118 "is_configured": true, 00:14:45.118 "data_offset": 0, 00:14:45.118 "data_size": 65536 00:14:45.118 }, 00:14:45.118 { 00:14:45.118 "name": null, 00:14:45.118 "uuid": "dfc0d42b-4875-40c3-8e70-502eb3171194", 00:14:45.118 "is_configured": false, 00:14:45.118 "data_offset": 0, 00:14:45.118 "data_size": 65536 00:14:45.118 }, 00:14:45.118 { 00:14:45.118 "name": "BaseBdev3", 00:14:45.118 "uuid": "8b416a30-2a81-4070-8525-e776969f6157", 00:14:45.118 "is_configured": true, 00:14:45.118 "data_offset": 0, 00:14:45.118 "data_size": 65536 00:14:45.118 } 00:14:45.118 ] 00:14:45.118 }' 00:14:45.118 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.118 11:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.377 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.377 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:45.377 11:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.377 11:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.636 11:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.636 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:45.636 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:45.636 11:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.636 11:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.636 [2024-11-27 11:53:11.791742] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:45.636 11:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.636 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:45.636 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.636 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.636 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.636 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.636 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.636 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.636 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.636 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.636 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.636 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.636 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.636 11:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.636 11:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.636 11:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.636 11:53:11 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.636 "name": "Existed_Raid", 00:14:45.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.636 "strip_size_kb": 64, 00:14:45.636 "state": "configuring", 00:14:45.636 "raid_level": "raid5f", 00:14:45.636 "superblock": false, 00:14:45.636 "num_base_bdevs": 3, 00:14:45.636 "num_base_bdevs_discovered": 1, 00:14:45.636 "num_base_bdevs_operational": 3, 00:14:45.636 "base_bdevs_list": [ 00:14:45.636 { 00:14:45.636 "name": null, 00:14:45.636 "uuid": "a1bb3401-1d4c-4786-993d-c00d8d926b97", 00:14:45.636 "is_configured": false, 00:14:45.636 "data_offset": 0, 00:14:45.636 "data_size": 65536 00:14:45.636 }, 00:14:45.636 { 00:14:45.636 "name": null, 00:14:45.636 "uuid": "dfc0d42b-4875-40c3-8e70-502eb3171194", 00:14:45.636 "is_configured": false, 00:14:45.636 "data_offset": 0, 00:14:45.636 "data_size": 65536 00:14:45.636 }, 00:14:45.636 { 00:14:45.636 "name": "BaseBdev3", 00:14:45.636 "uuid": "8b416a30-2a81-4070-8525-e776969f6157", 00:14:45.636 "is_configured": true, 00:14:45.636 "data_offset": 0, 00:14:45.636 "data_size": 65536 00:14:45.636 } 00:14:45.636 ] 00:14:45.636 }' 00:14:45.636 11:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.636 11:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.205 11:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.205 11:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:46.205 11:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.205 11:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.205 11:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.205 11:53:12 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:46.205 11:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:46.205 11:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.205 11:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.205 [2024-11-27 11:53:12.410004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:46.205 11:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.205 11:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:46.205 11:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.205 11:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.205 11:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.205 11:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.205 11:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.205 11:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.205 11:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.205 11:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.205 11:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.205 11:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.205 11:53:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.205 11:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.205 11:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.205 11:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.205 11:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.205 "name": "Existed_Raid", 00:14:46.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.205 "strip_size_kb": 64, 00:14:46.205 "state": "configuring", 00:14:46.205 "raid_level": "raid5f", 00:14:46.205 "superblock": false, 00:14:46.205 "num_base_bdevs": 3, 00:14:46.205 "num_base_bdevs_discovered": 2, 00:14:46.205 "num_base_bdevs_operational": 3, 00:14:46.205 "base_bdevs_list": [ 00:14:46.205 { 00:14:46.205 "name": null, 00:14:46.205 "uuid": "a1bb3401-1d4c-4786-993d-c00d8d926b97", 00:14:46.205 "is_configured": false, 00:14:46.205 "data_offset": 0, 00:14:46.205 "data_size": 65536 00:14:46.205 }, 00:14:46.205 { 00:14:46.205 "name": "BaseBdev2", 00:14:46.205 "uuid": "dfc0d42b-4875-40c3-8e70-502eb3171194", 00:14:46.205 "is_configured": true, 00:14:46.205 "data_offset": 0, 00:14:46.205 "data_size": 65536 00:14:46.205 }, 00:14:46.205 { 00:14:46.205 "name": "BaseBdev3", 00:14:46.205 "uuid": "8b416a30-2a81-4070-8525-e776969f6157", 00:14:46.205 "is_configured": true, 00:14:46.205 "data_offset": 0, 00:14:46.205 "data_size": 65536 00:14:46.205 } 00:14:46.205 ] 00:14:46.205 }' 00:14:46.205 11:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.205 11:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.774 11:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.775 11:53:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:46.775 11:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.775 11:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.775 11:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.775 11:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:46.775 11:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.775 11:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.775 11:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:46.775 11:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.775 11:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a1bb3401-1d4c-4786-993d-c00d8d926b97 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.775 [2024-11-27 11:53:13.047956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:46.775 [2024-11-27 11:53:13.048095] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:46.775 [2024-11-27 11:53:13.048128] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:46.775 [2024-11-27 11:53:13.048440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:14:46.775 [2024-11-27 11:53:13.055031] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:46.775 [2024-11-27 11:53:13.055094] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:46.775 [2024-11-27 11:53:13.055450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.775 NewBaseBdev 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.775 11:53:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.775 [ 00:14:46.775 { 00:14:46.775 "name": "NewBaseBdev", 00:14:46.775 "aliases": [ 00:14:46.775 "a1bb3401-1d4c-4786-993d-c00d8d926b97" 00:14:46.775 ], 00:14:46.775 "product_name": "Malloc disk", 00:14:46.775 "block_size": 512, 00:14:46.775 "num_blocks": 65536, 00:14:46.775 "uuid": "a1bb3401-1d4c-4786-993d-c00d8d926b97", 00:14:46.775 "assigned_rate_limits": { 00:14:46.775 "rw_ios_per_sec": 0, 00:14:46.775 "rw_mbytes_per_sec": 0, 00:14:46.775 "r_mbytes_per_sec": 0, 00:14:46.775 "w_mbytes_per_sec": 0 00:14:46.775 }, 00:14:46.775 "claimed": true, 00:14:46.775 "claim_type": "exclusive_write", 00:14:46.775 "zoned": false, 00:14:46.775 "supported_io_types": { 00:14:46.775 "read": true, 00:14:46.775 "write": true, 00:14:46.775 "unmap": true, 00:14:46.775 "flush": true, 00:14:46.775 "reset": true, 00:14:46.775 "nvme_admin": false, 00:14:46.775 "nvme_io": false, 00:14:46.775 "nvme_io_md": false, 00:14:46.775 "write_zeroes": true, 00:14:46.775 "zcopy": true, 00:14:46.775 "get_zone_info": false, 00:14:46.775 "zone_management": false, 00:14:46.775 "zone_append": false, 00:14:46.775 "compare": false, 00:14:46.775 "compare_and_write": false, 00:14:46.775 "abort": true, 00:14:46.775 "seek_hole": false, 00:14:46.775 "seek_data": false, 00:14:46.775 "copy": true, 00:14:46.775 "nvme_iov_md": false 00:14:46.775 }, 00:14:46.775 "memory_domains": [ 00:14:46.775 { 00:14:46.775 "dma_device_id": "system", 00:14:46.775 "dma_device_type": 1 00:14:46.775 }, 00:14:46.775 { 00:14:46.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.775 "dma_device_type": 2 00:14:46.775 } 00:14:46.775 ], 00:14:46.775 "driver_specific": {} 00:14:46.775 } 00:14:46.775 ] 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:46.775 11:53:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.775 "name": "Existed_Raid", 00:14:46.775 "uuid": "0d9696ab-0dde-473b-93d2-839802b5a7eb", 00:14:46.775 "strip_size_kb": 64, 00:14:46.775 "state": "online", 
00:14:46.775 "raid_level": "raid5f", 00:14:46.775 "superblock": false, 00:14:46.775 "num_base_bdevs": 3, 00:14:46.775 "num_base_bdevs_discovered": 3, 00:14:46.775 "num_base_bdevs_operational": 3, 00:14:46.775 "base_bdevs_list": [ 00:14:46.775 { 00:14:46.775 "name": "NewBaseBdev", 00:14:46.775 "uuid": "a1bb3401-1d4c-4786-993d-c00d8d926b97", 00:14:46.775 "is_configured": true, 00:14:46.775 "data_offset": 0, 00:14:46.775 "data_size": 65536 00:14:46.775 }, 00:14:46.775 { 00:14:46.775 "name": "BaseBdev2", 00:14:46.775 "uuid": "dfc0d42b-4875-40c3-8e70-502eb3171194", 00:14:46.775 "is_configured": true, 00:14:46.775 "data_offset": 0, 00:14:46.775 "data_size": 65536 00:14:46.775 }, 00:14:46.775 { 00:14:46.775 "name": "BaseBdev3", 00:14:46.775 "uuid": "8b416a30-2a81-4070-8525-e776969f6157", 00:14:46.775 "is_configured": true, 00:14:46.775 "data_offset": 0, 00:14:46.775 "data_size": 65536 00:14:46.775 } 00:14:46.775 ] 00:14:46.775 }' 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.775 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.343 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:47.343 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:47.343 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:47.343 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:47.343 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:47.343 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:47.343 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:47.343 11:53:13 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:47.343 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.343 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.343 [2024-11-27 11:53:13.566537] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:47.343 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.343 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:47.343 "name": "Existed_Raid", 00:14:47.343 "aliases": [ 00:14:47.343 "0d9696ab-0dde-473b-93d2-839802b5a7eb" 00:14:47.343 ], 00:14:47.343 "product_name": "Raid Volume", 00:14:47.343 "block_size": 512, 00:14:47.343 "num_blocks": 131072, 00:14:47.343 "uuid": "0d9696ab-0dde-473b-93d2-839802b5a7eb", 00:14:47.343 "assigned_rate_limits": { 00:14:47.343 "rw_ios_per_sec": 0, 00:14:47.343 "rw_mbytes_per_sec": 0, 00:14:47.343 "r_mbytes_per_sec": 0, 00:14:47.343 "w_mbytes_per_sec": 0 00:14:47.343 }, 00:14:47.343 "claimed": false, 00:14:47.343 "zoned": false, 00:14:47.343 "supported_io_types": { 00:14:47.343 "read": true, 00:14:47.343 "write": true, 00:14:47.343 "unmap": false, 00:14:47.343 "flush": false, 00:14:47.343 "reset": true, 00:14:47.343 "nvme_admin": false, 00:14:47.343 "nvme_io": false, 00:14:47.343 "nvme_io_md": false, 00:14:47.343 "write_zeroes": true, 00:14:47.343 "zcopy": false, 00:14:47.343 "get_zone_info": false, 00:14:47.343 "zone_management": false, 00:14:47.343 "zone_append": false, 00:14:47.343 "compare": false, 00:14:47.343 "compare_and_write": false, 00:14:47.343 "abort": false, 00:14:47.343 "seek_hole": false, 00:14:47.343 "seek_data": false, 00:14:47.343 "copy": false, 00:14:47.343 "nvme_iov_md": false 00:14:47.343 }, 00:14:47.343 "driver_specific": { 00:14:47.343 "raid": { 00:14:47.343 "uuid": "0d9696ab-0dde-473b-93d2-839802b5a7eb", 
00:14:47.343 "strip_size_kb": 64, 00:14:47.343 "state": "online", 00:14:47.343 "raid_level": "raid5f", 00:14:47.343 "superblock": false, 00:14:47.343 "num_base_bdevs": 3, 00:14:47.343 "num_base_bdevs_discovered": 3, 00:14:47.343 "num_base_bdevs_operational": 3, 00:14:47.343 "base_bdevs_list": [ 00:14:47.343 { 00:14:47.343 "name": "NewBaseBdev", 00:14:47.343 "uuid": "a1bb3401-1d4c-4786-993d-c00d8d926b97", 00:14:47.343 "is_configured": true, 00:14:47.343 "data_offset": 0, 00:14:47.343 "data_size": 65536 00:14:47.343 }, 00:14:47.343 { 00:14:47.343 "name": "BaseBdev2", 00:14:47.343 "uuid": "dfc0d42b-4875-40c3-8e70-502eb3171194", 00:14:47.343 "is_configured": true, 00:14:47.343 "data_offset": 0, 00:14:47.343 "data_size": 65536 00:14:47.343 }, 00:14:47.343 { 00:14:47.343 "name": "BaseBdev3", 00:14:47.343 "uuid": "8b416a30-2a81-4070-8525-e776969f6157", 00:14:47.343 "is_configured": true, 00:14:47.343 "data_offset": 0, 00:14:47.343 "data_size": 65536 00:14:47.343 } 00:14:47.343 ] 00:14:47.343 } 00:14:47.343 } 00:14:47.343 }' 00:14:47.343 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:47.343 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:47.343 BaseBdev2 00:14:47.343 BaseBdev3' 00:14:47.343 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.343 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:47.343 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:47.343 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:47.343 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:47.343 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.343 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.343 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:47.602 11:53:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.602 [2024-11-27 11:53:13.865830] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:47.602 [2024-11-27 11:53:13.865928] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:47.602 [2024-11-27 11:53:13.866044] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:47.602 [2024-11-27 11:53:13.866389] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:47.602 [2024-11-27 11:53:13.866458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79908 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 79908 ']' 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 79908 
00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79908 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:47.602 killing process with pid 79908 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79908' 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 79908 00:14:47.602 [2024-11-27 11:53:13.914973] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:47.602 11:53:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 79908 00:14:48.170 [2024-11-27 11:53:14.287491] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:49.545 11:53:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:49.545 00:14:49.545 real 0m11.721s 00:14:49.545 user 0m18.415s 00:14:49.545 sys 0m2.156s 00:14:49.545 11:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:49.545 11:53:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.545 ************************************ 00:14:49.545 END TEST raid5f_state_function_test 00:14:49.545 ************************************ 00:14:49.545 11:53:15 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:14:49.545 11:53:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:49.545 
11:53:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:49.545 11:53:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:49.545 ************************************ 00:14:49.545 START TEST raid5f_state_function_test_sb 00:14:49.545 ************************************ 00:14:49.545 11:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:14:49.545 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:49.545 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:49.545 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:49.545 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:49.545 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:49.545 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:49.545 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:49.545 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:49.545 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:49.545 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:49.545 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:49.545 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:49.545 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:49.545 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:49.545 
11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:49.545 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:49.545 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:49.545 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:49.545 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:49.545 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:49.545 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:49.545 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:49.545 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:49.545 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:49.546 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:49.546 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:49.546 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80535 00:14:49.546 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:49.546 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80535' 00:14:49.546 Process raid pid: 80535 00:14:49.546 11:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80535 00:14:49.546 11:53:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80535 ']' 00:14:49.546 11:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.546 11:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:49.546 11:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.546 11:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:49.546 11:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.546 [2024-11-27 11:53:15.827054] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:14:49.546 [2024-11-27 11:53:15.827189] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.804 [2024-11-27 11:53:16.009195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.804 [2024-11-27 11:53:16.144358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.063 [2024-11-27 11:53:16.395012] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:50.063 [2024-11-27 11:53:16.395157] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:50.630 11:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:50.630 11:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:50.630 11:53:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:50.630 11:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.630 11:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.630 [2024-11-27 11:53:16.757603] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:50.630 [2024-11-27 11:53:16.757674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:50.630 [2024-11-27 11:53:16.757687] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:50.630 [2024-11-27 11:53:16.757698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:50.630 [2024-11-27 11:53:16.757711] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:50.630 [2024-11-27 11:53:16.757723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:50.630 11:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.630 11:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:50.630 11:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.630 11:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.630 11:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.630 11:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.630 11:53:16 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.630 11:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.631 11:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.631 11:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.631 11:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.631 11:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.631 11:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.631 11:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.631 11:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.631 11:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.631 11:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.631 "name": "Existed_Raid", 00:14:50.631 "uuid": "8b91546e-d145-428d-a13c-eec6bf30401c", 00:14:50.631 "strip_size_kb": 64, 00:14:50.631 "state": "configuring", 00:14:50.631 "raid_level": "raid5f", 00:14:50.631 "superblock": true, 00:14:50.631 "num_base_bdevs": 3, 00:14:50.631 "num_base_bdevs_discovered": 0, 00:14:50.631 "num_base_bdevs_operational": 3, 00:14:50.631 "base_bdevs_list": [ 00:14:50.631 { 00:14:50.631 "name": "BaseBdev1", 00:14:50.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.631 "is_configured": false, 00:14:50.631 "data_offset": 0, 00:14:50.631 "data_size": 0 00:14:50.631 }, 00:14:50.631 { 00:14:50.631 "name": "BaseBdev2", 00:14:50.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.631 "is_configured": false, 00:14:50.631 
"data_offset": 0, 00:14:50.631 "data_size": 0 00:14:50.631 }, 00:14:50.631 { 00:14:50.631 "name": "BaseBdev3", 00:14:50.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.631 "is_configured": false, 00:14:50.631 "data_offset": 0, 00:14:50.631 "data_size": 0 00:14:50.631 } 00:14:50.631 ] 00:14:50.631 }' 00:14:50.631 11:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.631 11:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.890 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:50.890 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.890 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.890 [2024-11-27 11:53:17.236776] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:50.890 [2024-11-27 11:53:17.236896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:50.890 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.890 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:50.890 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.890 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.890 [2024-11-27 11:53:17.248760] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:50.890 [2024-11-27 11:53:17.248868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:50.890 [2024-11-27 11:53:17.248903] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:50.890 [2024-11-27 11:53:17.248933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:50.890 [2024-11-27 11:53:17.248956] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:50.890 [2024-11-27 11:53:17.248981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:50.890 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.890 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:50.890 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.890 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.148 [2024-11-27 11:53:17.305422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.148 BaseBdev1 00:14:51.148 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.148 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:51.148 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:51.148 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:51.148 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:51.148 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:51.148 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:51.148 11:53:17 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:51.148 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.148 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.148 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.148 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:51.148 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.148 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.148 [ 00:14:51.148 { 00:14:51.148 "name": "BaseBdev1", 00:14:51.148 "aliases": [ 00:14:51.148 "14600eba-ed0b-46b0-bdc6-ecdb7d8e2d2f" 00:14:51.148 ], 00:14:51.148 "product_name": "Malloc disk", 00:14:51.149 "block_size": 512, 00:14:51.149 "num_blocks": 65536, 00:14:51.149 "uuid": "14600eba-ed0b-46b0-bdc6-ecdb7d8e2d2f", 00:14:51.149 "assigned_rate_limits": { 00:14:51.149 "rw_ios_per_sec": 0, 00:14:51.149 "rw_mbytes_per_sec": 0, 00:14:51.149 "r_mbytes_per_sec": 0, 00:14:51.149 "w_mbytes_per_sec": 0 00:14:51.149 }, 00:14:51.149 "claimed": true, 00:14:51.149 "claim_type": "exclusive_write", 00:14:51.149 "zoned": false, 00:14:51.149 "supported_io_types": { 00:14:51.149 "read": true, 00:14:51.149 "write": true, 00:14:51.149 "unmap": true, 00:14:51.149 "flush": true, 00:14:51.149 "reset": true, 00:14:51.149 "nvme_admin": false, 00:14:51.149 "nvme_io": false, 00:14:51.149 "nvme_io_md": false, 00:14:51.149 "write_zeroes": true, 00:14:51.149 "zcopy": true, 00:14:51.149 "get_zone_info": false, 00:14:51.149 "zone_management": false, 00:14:51.149 "zone_append": false, 00:14:51.149 "compare": false, 00:14:51.149 "compare_and_write": false, 00:14:51.149 "abort": true, 00:14:51.149 "seek_hole": false, 00:14:51.149 
"seek_data": false, 00:14:51.149 "copy": true, 00:14:51.149 "nvme_iov_md": false 00:14:51.149 }, 00:14:51.149 "memory_domains": [ 00:14:51.149 { 00:14:51.149 "dma_device_id": "system", 00:14:51.149 "dma_device_type": 1 00:14:51.149 }, 00:14:51.149 { 00:14:51.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.149 "dma_device_type": 2 00:14:51.149 } 00:14:51.149 ], 00:14:51.149 "driver_specific": {} 00:14:51.149 } 00:14:51.149 ] 00:14:51.149 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.149 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:51.149 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:51.149 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.149 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.149 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.149 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.149 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.149 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.149 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.149 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.149 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.149 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:51.149 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.149 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.149 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.149 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.149 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.149 "name": "Existed_Raid", 00:14:51.149 "uuid": "7b7ddc88-0eea-4aa6-bbe8-47c155faae0a", 00:14:51.149 "strip_size_kb": 64, 00:14:51.149 "state": "configuring", 00:14:51.149 "raid_level": "raid5f", 00:14:51.149 "superblock": true, 00:14:51.149 "num_base_bdevs": 3, 00:14:51.149 "num_base_bdevs_discovered": 1, 00:14:51.149 "num_base_bdevs_operational": 3, 00:14:51.149 "base_bdevs_list": [ 00:14:51.149 { 00:14:51.149 "name": "BaseBdev1", 00:14:51.149 "uuid": "14600eba-ed0b-46b0-bdc6-ecdb7d8e2d2f", 00:14:51.149 "is_configured": true, 00:14:51.149 "data_offset": 2048, 00:14:51.149 "data_size": 63488 00:14:51.149 }, 00:14:51.149 { 00:14:51.149 "name": "BaseBdev2", 00:14:51.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.149 "is_configured": false, 00:14:51.149 "data_offset": 0, 00:14:51.149 "data_size": 0 00:14:51.149 }, 00:14:51.149 { 00:14:51.149 "name": "BaseBdev3", 00:14:51.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.149 "is_configured": false, 00:14:51.149 "data_offset": 0, 00:14:51.149 "data_size": 0 00:14:51.149 } 00:14:51.149 ] 00:14:51.149 }' 00:14:51.149 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.149 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.717 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:14:51.717 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.717 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.717 [2024-11-27 11:53:17.800682] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:51.717 [2024-11-27 11:53:17.800748] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:51.717 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.717 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:51.717 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.717 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.717 [2024-11-27 11:53:17.812715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.717 [2024-11-27 11:53:17.814862] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:51.717 [2024-11-27 11:53:17.814908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:51.717 [2024-11-27 11:53:17.814920] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:51.717 [2024-11-27 11:53:17.814930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:51.717 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.717 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:51.717 11:53:17 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:51.717 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:51.717 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.717 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.717 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.717 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.717 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.717 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.717 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.717 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.717 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.717 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.717 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.717 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.717 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.717 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.717 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.717 "name": 
"Existed_Raid", 00:14:51.717 "uuid": "1ce8bec3-a658-4a06-a61a-6c5a35943274", 00:14:51.717 "strip_size_kb": 64, 00:14:51.717 "state": "configuring", 00:14:51.717 "raid_level": "raid5f", 00:14:51.717 "superblock": true, 00:14:51.717 "num_base_bdevs": 3, 00:14:51.717 "num_base_bdevs_discovered": 1, 00:14:51.717 "num_base_bdevs_operational": 3, 00:14:51.717 "base_bdevs_list": [ 00:14:51.717 { 00:14:51.717 "name": "BaseBdev1", 00:14:51.717 "uuid": "14600eba-ed0b-46b0-bdc6-ecdb7d8e2d2f", 00:14:51.717 "is_configured": true, 00:14:51.717 "data_offset": 2048, 00:14:51.718 "data_size": 63488 00:14:51.718 }, 00:14:51.718 { 00:14:51.718 "name": "BaseBdev2", 00:14:51.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.718 "is_configured": false, 00:14:51.718 "data_offset": 0, 00:14:51.718 "data_size": 0 00:14:51.718 }, 00:14:51.718 { 00:14:51.718 "name": "BaseBdev3", 00:14:51.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.718 "is_configured": false, 00:14:51.718 "data_offset": 0, 00:14:51.718 "data_size": 0 00:14:51.718 } 00:14:51.718 ] 00:14:51.718 }' 00:14:51.718 11:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.718 11:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.977 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:51.977 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.977 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.977 [2024-11-27 11:53:18.319566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:51.977 BaseBdev2 00:14:51.977 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.977 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:14:51.977 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:51.977 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:51.977 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:51.977 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:51.977 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:51.977 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:51.977 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.977 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.977 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.977 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:51.977 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.977 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.977 [ 00:14:51.977 { 00:14:51.977 "name": "BaseBdev2", 00:14:51.977 "aliases": [ 00:14:51.977 "352b9d68-74aa-4b78-9dfd-09113126fcce" 00:14:51.977 ], 00:14:51.977 "product_name": "Malloc disk", 00:14:51.977 "block_size": 512, 00:14:51.977 "num_blocks": 65536, 00:14:51.977 "uuid": "352b9d68-74aa-4b78-9dfd-09113126fcce", 00:14:51.977 "assigned_rate_limits": { 00:14:51.977 "rw_ios_per_sec": 0, 00:14:51.977 "rw_mbytes_per_sec": 0, 00:14:51.977 "r_mbytes_per_sec": 0, 00:14:51.977 "w_mbytes_per_sec": 0 00:14:51.977 }, 00:14:51.977 "claimed": true, 
00:14:51.977 "claim_type": "exclusive_write", 00:14:51.977 "zoned": false, 00:14:51.977 "supported_io_types": { 00:14:51.977 "read": true, 00:14:51.977 "write": true, 00:14:51.977 "unmap": true, 00:14:51.977 "flush": true, 00:14:51.977 "reset": true, 00:14:51.977 "nvme_admin": false, 00:14:51.977 "nvme_io": false, 00:14:51.977 "nvme_io_md": false, 00:14:51.977 "write_zeroes": true, 00:14:51.977 "zcopy": true, 00:14:51.977 "get_zone_info": false, 00:14:51.977 "zone_management": false, 00:14:51.977 "zone_append": false, 00:14:51.977 "compare": false, 00:14:51.977 "compare_and_write": false, 00:14:51.977 "abort": true, 00:14:51.977 "seek_hole": false, 00:14:51.977 "seek_data": false, 00:14:51.977 "copy": true, 00:14:51.977 "nvme_iov_md": false 00:14:51.977 }, 00:14:51.977 "memory_domains": [ 00:14:51.977 { 00:14:51.977 "dma_device_id": "system", 00:14:51.977 "dma_device_type": 1 00:14:51.977 }, 00:14:51.977 { 00:14:51.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.977 "dma_device_type": 2 00:14:51.977 } 00:14:51.977 ], 00:14:51.977 "driver_specific": {} 00:14:51.977 } 00:14:51.977 ] 00:14:52.236 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.236 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:52.236 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:52.236 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:52.236 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:52.236 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.236 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.236 11:53:18 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.236 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.236 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.236 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.236 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.236 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.236 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.236 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.236 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.236 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.236 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.236 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.236 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.236 "name": "Existed_Raid", 00:14:52.236 "uuid": "1ce8bec3-a658-4a06-a61a-6c5a35943274", 00:14:52.236 "strip_size_kb": 64, 00:14:52.236 "state": "configuring", 00:14:52.236 "raid_level": "raid5f", 00:14:52.236 "superblock": true, 00:14:52.236 "num_base_bdevs": 3, 00:14:52.236 "num_base_bdevs_discovered": 2, 00:14:52.236 "num_base_bdevs_operational": 3, 00:14:52.236 "base_bdevs_list": [ 00:14:52.236 { 00:14:52.236 "name": "BaseBdev1", 00:14:52.236 "uuid": "14600eba-ed0b-46b0-bdc6-ecdb7d8e2d2f", 
00:14:52.236 "is_configured": true, 00:14:52.236 "data_offset": 2048, 00:14:52.236 "data_size": 63488 00:14:52.236 }, 00:14:52.236 { 00:14:52.236 "name": "BaseBdev2", 00:14:52.236 "uuid": "352b9d68-74aa-4b78-9dfd-09113126fcce", 00:14:52.236 "is_configured": true, 00:14:52.236 "data_offset": 2048, 00:14:52.236 "data_size": 63488 00:14:52.236 }, 00:14:52.236 { 00:14:52.236 "name": "BaseBdev3", 00:14:52.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.236 "is_configured": false, 00:14:52.236 "data_offset": 0, 00:14:52.236 "data_size": 0 00:14:52.236 } 00:14:52.236 ] 00:14:52.236 }' 00:14:52.236 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.236 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.516 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:52.516 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.516 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.516 [2024-11-27 11:53:18.853805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:52.516 [2024-11-27 11:53:18.854133] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:52.516 [2024-11-27 11:53:18.854164] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:52.516 [2024-11-27 11:53:18.854466] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:52.516 BaseBdev3 00:14:52.516 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.516 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:52.516 11:53:18 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:52.516 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:52.516 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:52.516 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:52.516 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:52.516 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:52.516 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.516 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.516 [2024-11-27 11:53:18.861005] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:52.516 [2024-11-27 11:53:18.861080] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:52.516 [2024-11-27 11:53:18.861313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.516 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.516 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:52.516 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.516 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.516 [ 00:14:52.516 { 00:14:52.516 "name": "BaseBdev3", 00:14:52.516 "aliases": [ 00:14:52.516 "49ebf235-bc79-4a91-8a8d-4f014e7c5a08" 00:14:52.516 ], 00:14:52.516 "product_name": "Malloc disk", 00:14:52.516 "block_size": 512, 00:14:52.516 
"num_blocks": 65536, 00:14:52.799 "uuid": "49ebf235-bc79-4a91-8a8d-4f014e7c5a08", 00:14:52.799 "assigned_rate_limits": { 00:14:52.799 "rw_ios_per_sec": 0, 00:14:52.799 "rw_mbytes_per_sec": 0, 00:14:52.799 "r_mbytes_per_sec": 0, 00:14:52.799 "w_mbytes_per_sec": 0 00:14:52.799 }, 00:14:52.799 "claimed": true, 00:14:52.799 "claim_type": "exclusive_write", 00:14:52.799 "zoned": false, 00:14:52.799 "supported_io_types": { 00:14:52.799 "read": true, 00:14:52.799 "write": true, 00:14:52.799 "unmap": true, 00:14:52.799 "flush": true, 00:14:52.799 "reset": true, 00:14:52.799 "nvme_admin": false, 00:14:52.799 "nvme_io": false, 00:14:52.799 "nvme_io_md": false, 00:14:52.799 "write_zeroes": true, 00:14:52.799 "zcopy": true, 00:14:52.799 "get_zone_info": false, 00:14:52.799 "zone_management": false, 00:14:52.799 "zone_append": false, 00:14:52.799 "compare": false, 00:14:52.799 "compare_and_write": false, 00:14:52.799 "abort": true, 00:14:52.799 "seek_hole": false, 00:14:52.799 "seek_data": false, 00:14:52.799 "copy": true, 00:14:52.799 "nvme_iov_md": false 00:14:52.799 }, 00:14:52.799 "memory_domains": [ 00:14:52.799 { 00:14:52.799 "dma_device_id": "system", 00:14:52.799 "dma_device_type": 1 00:14:52.799 }, 00:14:52.799 { 00:14:52.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.799 "dma_device_type": 2 00:14:52.799 } 00:14:52.799 ], 00:14:52.799 "driver_specific": {} 00:14:52.799 } 00:14:52.799 ] 00:14:52.799 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.799 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:52.799 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:52.799 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:52.799 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:14:52.799 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.799 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.799 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.799 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.799 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.799 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.799 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.799 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.799 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.799 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.799 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.799 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.799 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.799 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.799 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.799 "name": "Existed_Raid", 00:14:52.800 "uuid": "1ce8bec3-a658-4a06-a61a-6c5a35943274", 00:14:52.800 "strip_size_kb": 64, 00:14:52.800 "state": "online", 00:14:52.800 "raid_level": "raid5f", 00:14:52.800 "superblock": true, 
00:14:52.800 "num_base_bdevs": 3, 00:14:52.800 "num_base_bdevs_discovered": 3, 00:14:52.800 "num_base_bdevs_operational": 3, 00:14:52.800 "base_bdevs_list": [ 00:14:52.800 { 00:14:52.800 "name": "BaseBdev1", 00:14:52.800 "uuid": "14600eba-ed0b-46b0-bdc6-ecdb7d8e2d2f", 00:14:52.800 "is_configured": true, 00:14:52.800 "data_offset": 2048, 00:14:52.800 "data_size": 63488 00:14:52.800 }, 00:14:52.800 { 00:14:52.800 "name": "BaseBdev2", 00:14:52.800 "uuid": "352b9d68-74aa-4b78-9dfd-09113126fcce", 00:14:52.800 "is_configured": true, 00:14:52.800 "data_offset": 2048, 00:14:52.800 "data_size": 63488 00:14:52.800 }, 00:14:52.800 { 00:14:52.800 "name": "BaseBdev3", 00:14:52.800 "uuid": "49ebf235-bc79-4a91-8a8d-4f014e7c5a08", 00:14:52.800 "is_configured": true, 00:14:52.800 "data_offset": 2048, 00:14:52.800 "data_size": 63488 00:14:52.800 } 00:14:52.800 ] 00:14:52.800 }' 00:14:52.800 11:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.800 11:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.057 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:53.057 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:53.057 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:53.057 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:53.057 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:53.057 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:53.057 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:53.057 11:53:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:53.058 11:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.058 11:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.058 [2024-11-27 11:53:19.388234] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.058 11:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.058 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:53.058 "name": "Existed_Raid", 00:14:53.058 "aliases": [ 00:14:53.058 "1ce8bec3-a658-4a06-a61a-6c5a35943274" 00:14:53.058 ], 00:14:53.058 "product_name": "Raid Volume", 00:14:53.058 "block_size": 512, 00:14:53.058 "num_blocks": 126976, 00:14:53.058 "uuid": "1ce8bec3-a658-4a06-a61a-6c5a35943274", 00:14:53.058 "assigned_rate_limits": { 00:14:53.058 "rw_ios_per_sec": 0, 00:14:53.058 "rw_mbytes_per_sec": 0, 00:14:53.058 "r_mbytes_per_sec": 0, 00:14:53.058 "w_mbytes_per_sec": 0 00:14:53.058 }, 00:14:53.058 "claimed": false, 00:14:53.058 "zoned": false, 00:14:53.058 "supported_io_types": { 00:14:53.058 "read": true, 00:14:53.058 "write": true, 00:14:53.058 "unmap": false, 00:14:53.058 "flush": false, 00:14:53.058 "reset": true, 00:14:53.058 "nvme_admin": false, 00:14:53.058 "nvme_io": false, 00:14:53.058 "nvme_io_md": false, 00:14:53.058 "write_zeroes": true, 00:14:53.058 "zcopy": false, 00:14:53.058 "get_zone_info": false, 00:14:53.058 "zone_management": false, 00:14:53.058 "zone_append": false, 00:14:53.058 "compare": false, 00:14:53.058 "compare_and_write": false, 00:14:53.058 "abort": false, 00:14:53.058 "seek_hole": false, 00:14:53.058 "seek_data": false, 00:14:53.058 "copy": false, 00:14:53.058 "nvme_iov_md": false 00:14:53.058 }, 00:14:53.058 "driver_specific": { 00:14:53.058 "raid": { 00:14:53.058 "uuid": "1ce8bec3-a658-4a06-a61a-6c5a35943274", 00:14:53.058 
"strip_size_kb": 64, 00:14:53.058 "state": "online", 00:14:53.058 "raid_level": "raid5f", 00:14:53.058 "superblock": true, 00:14:53.058 "num_base_bdevs": 3, 00:14:53.058 "num_base_bdevs_discovered": 3, 00:14:53.058 "num_base_bdevs_operational": 3, 00:14:53.058 "base_bdevs_list": [ 00:14:53.058 { 00:14:53.058 "name": "BaseBdev1", 00:14:53.058 "uuid": "14600eba-ed0b-46b0-bdc6-ecdb7d8e2d2f", 00:14:53.058 "is_configured": true, 00:14:53.058 "data_offset": 2048, 00:14:53.058 "data_size": 63488 00:14:53.058 }, 00:14:53.058 { 00:14:53.058 "name": "BaseBdev2", 00:14:53.058 "uuid": "352b9d68-74aa-4b78-9dfd-09113126fcce", 00:14:53.058 "is_configured": true, 00:14:53.058 "data_offset": 2048, 00:14:53.058 "data_size": 63488 00:14:53.058 }, 00:14:53.058 { 00:14:53.058 "name": "BaseBdev3", 00:14:53.058 "uuid": "49ebf235-bc79-4a91-8a8d-4f014e7c5a08", 00:14:53.058 "is_configured": true, 00:14:53.058 "data_offset": 2048, 00:14:53.058 "data_size": 63488 00:14:53.058 } 00:14:53.058 ] 00:14:53.058 } 00:14:53.058 } 00:14:53.058 }' 00:14:53.058 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:53.316 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:53.316 BaseBdev2 00:14:53.316 BaseBdev3' 00:14:53.316 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.316 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:53.316 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.316 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:53.316 11:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:53.316 11:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.316 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.317 11:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.317 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.317 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.317 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.317 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:53.317 11:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.317 11:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.317 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.317 11:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.317 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.317 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.317 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.317 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.317 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:53.317 11:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.317 11:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.317 11:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.317 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.317 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.317 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:53.317 11:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.317 11:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.317 [2024-11-27 11:53:19.667723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:53.575 11:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.575 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:53.575 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:53.575 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:53.575 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:53.575 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:53.575 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:53.575 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:14:53.575 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.575 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.575 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.575 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:53.575 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.575 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.575 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.575 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.575 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.575 11:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.575 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.575 11:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.575 11:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.575 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.575 "name": "Existed_Raid", 00:14:53.575 "uuid": "1ce8bec3-a658-4a06-a61a-6c5a35943274", 00:14:53.575 "strip_size_kb": 64, 00:14:53.575 "state": "online", 00:14:53.575 "raid_level": "raid5f", 00:14:53.575 "superblock": true, 00:14:53.575 "num_base_bdevs": 3, 00:14:53.575 "num_base_bdevs_discovered": 2, 00:14:53.575 "num_base_bdevs_operational": 2, 
00:14:53.575 "base_bdevs_list": [ 00:14:53.575 { 00:14:53.575 "name": null, 00:14:53.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.575 "is_configured": false, 00:14:53.575 "data_offset": 0, 00:14:53.576 "data_size": 63488 00:14:53.576 }, 00:14:53.576 { 00:14:53.576 "name": "BaseBdev2", 00:14:53.576 "uuid": "352b9d68-74aa-4b78-9dfd-09113126fcce", 00:14:53.576 "is_configured": true, 00:14:53.576 "data_offset": 2048, 00:14:53.576 "data_size": 63488 00:14:53.576 }, 00:14:53.576 { 00:14:53.576 "name": "BaseBdev3", 00:14:53.576 "uuid": "49ebf235-bc79-4a91-8a8d-4f014e7c5a08", 00:14:53.576 "is_configured": true, 00:14:53.576 "data_offset": 2048, 00:14:53.576 "data_size": 63488 00:14:53.576 } 00:14:53.576 ] 00:14:53.576 }' 00:14:53.576 11:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.576 11:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.144 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:54.144 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:54.144 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:54.144 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.144 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.144 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.144 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.144 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:54.144 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:54.144 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:54.144 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.144 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.144 [2024-11-27 11:53:20.284657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:54.144 [2024-11-27 11:53:20.284829] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:54.144 [2024-11-27 11:53:20.404021] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.144 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.145 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:54.145 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:54.145 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.145 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:54.145 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.145 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.145 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.145 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:54.145 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:54.145 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:54.145 
11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.145 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.145 [2024-11-27 11:53:20.460033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:54.145 [2024-11-27 11:53:20.460091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.403 BaseBdev2 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.403 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.403 [ 00:14:54.403 { 
00:14:54.403 "name": "BaseBdev2", 00:14:54.403 "aliases": [ 00:14:54.403 "16ef3ebc-e84b-4375-9b3f-4d87cdf62df4" 00:14:54.403 ], 00:14:54.403 "product_name": "Malloc disk", 00:14:54.403 "block_size": 512, 00:14:54.403 "num_blocks": 65536, 00:14:54.403 "uuid": "16ef3ebc-e84b-4375-9b3f-4d87cdf62df4", 00:14:54.404 "assigned_rate_limits": { 00:14:54.404 "rw_ios_per_sec": 0, 00:14:54.404 "rw_mbytes_per_sec": 0, 00:14:54.404 "r_mbytes_per_sec": 0, 00:14:54.404 "w_mbytes_per_sec": 0 00:14:54.404 }, 00:14:54.404 "claimed": false, 00:14:54.404 "zoned": false, 00:14:54.404 "supported_io_types": { 00:14:54.404 "read": true, 00:14:54.404 "write": true, 00:14:54.404 "unmap": true, 00:14:54.404 "flush": true, 00:14:54.404 "reset": true, 00:14:54.404 "nvme_admin": false, 00:14:54.404 "nvme_io": false, 00:14:54.404 "nvme_io_md": false, 00:14:54.404 "write_zeroes": true, 00:14:54.404 "zcopy": true, 00:14:54.404 "get_zone_info": false, 00:14:54.404 "zone_management": false, 00:14:54.404 "zone_append": false, 00:14:54.404 "compare": false, 00:14:54.404 "compare_and_write": false, 00:14:54.404 "abort": true, 00:14:54.404 "seek_hole": false, 00:14:54.404 "seek_data": false, 00:14:54.404 "copy": true, 00:14:54.404 "nvme_iov_md": false 00:14:54.404 }, 00:14:54.404 "memory_domains": [ 00:14:54.404 { 00:14:54.404 "dma_device_id": "system", 00:14:54.404 "dma_device_type": 1 00:14:54.404 }, 00:14:54.404 { 00:14:54.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.404 "dma_device_type": 2 00:14:54.404 } 00:14:54.404 ], 00:14:54.404 "driver_specific": {} 00:14:54.404 } 00:14:54.404 ] 00:14:54.404 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.404 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:54.404 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:54.404 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- 
# (( i < num_base_bdevs )) 00:14:54.404 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:54.404 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.404 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.404 BaseBdev3 00:14:54.404 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.404 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:54.404 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:54.404 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:54.404 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:54.404 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:54.404 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:54.404 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:54.404 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.404 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.404 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.404 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:54.404 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.404 11:53:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.662 [ 00:14:54.662 { 00:14:54.662 "name": "BaseBdev3", 00:14:54.662 "aliases": [ 00:14:54.662 "3bfbd5c9-4f04-42a0-a40c-d89860339858" 00:14:54.662 ], 00:14:54.662 "product_name": "Malloc disk", 00:14:54.662 "block_size": 512, 00:14:54.662 "num_blocks": 65536, 00:14:54.662 "uuid": "3bfbd5c9-4f04-42a0-a40c-d89860339858", 00:14:54.662 "assigned_rate_limits": { 00:14:54.662 "rw_ios_per_sec": 0, 00:14:54.662 "rw_mbytes_per_sec": 0, 00:14:54.662 "r_mbytes_per_sec": 0, 00:14:54.662 "w_mbytes_per_sec": 0 00:14:54.662 }, 00:14:54.662 "claimed": false, 00:14:54.662 "zoned": false, 00:14:54.662 "supported_io_types": { 00:14:54.662 "read": true, 00:14:54.662 "write": true, 00:14:54.663 "unmap": true, 00:14:54.663 "flush": true, 00:14:54.663 "reset": true, 00:14:54.663 "nvme_admin": false, 00:14:54.663 "nvme_io": false, 00:14:54.663 "nvme_io_md": false, 00:14:54.663 "write_zeroes": true, 00:14:54.663 "zcopy": true, 00:14:54.663 "get_zone_info": false, 00:14:54.663 "zone_management": false, 00:14:54.663 "zone_append": false, 00:14:54.663 "compare": false, 00:14:54.663 "compare_and_write": false, 00:14:54.663 "abort": true, 00:14:54.663 "seek_hole": false, 00:14:54.663 "seek_data": false, 00:14:54.663 "copy": true, 00:14:54.663 "nvme_iov_md": false 00:14:54.663 }, 00:14:54.663 "memory_domains": [ 00:14:54.663 { 00:14:54.663 "dma_device_id": "system", 00:14:54.663 "dma_device_type": 1 00:14:54.663 }, 00:14:54.663 { 00:14:54.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.663 "dma_device_type": 2 00:14:54.663 } 00:14:54.663 ], 00:14:54.663 "driver_specific": {} 00:14:54.663 } 00:14:54.663 ] 00:14:54.663 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.663 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:54.663 11:53:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:54.663 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:54.663 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:54.663 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.663 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.663 [2024-11-27 11:53:20.815569] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:54.663 [2024-11-27 11:53:20.815699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:54.663 [2024-11-27 11:53:20.815760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:54.663 [2024-11-27 11:53:20.818071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:54.663 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.663 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:54.663 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.663 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.663 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.663 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.663 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.663 11:53:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.663 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.663 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.663 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.663 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.663 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.663 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.663 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.663 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.663 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.663 "name": "Existed_Raid", 00:14:54.663 "uuid": "de62973b-0e7c-42a4-b02c-e542aed51869", 00:14:54.663 "strip_size_kb": 64, 00:14:54.663 "state": "configuring", 00:14:54.663 "raid_level": "raid5f", 00:14:54.663 "superblock": true, 00:14:54.663 "num_base_bdevs": 3, 00:14:54.663 "num_base_bdevs_discovered": 2, 00:14:54.663 "num_base_bdevs_operational": 3, 00:14:54.663 "base_bdevs_list": [ 00:14:54.663 { 00:14:54.663 "name": "BaseBdev1", 00:14:54.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.663 "is_configured": false, 00:14:54.663 "data_offset": 0, 00:14:54.663 "data_size": 0 00:14:54.663 }, 00:14:54.663 { 00:14:54.663 "name": "BaseBdev2", 00:14:54.663 "uuid": "16ef3ebc-e84b-4375-9b3f-4d87cdf62df4", 00:14:54.663 "is_configured": true, 00:14:54.663 "data_offset": 2048, 00:14:54.663 "data_size": 63488 00:14:54.663 }, 00:14:54.663 { 
00:14:54.663 "name": "BaseBdev3", 00:14:54.663 "uuid": "3bfbd5c9-4f04-42a0-a40c-d89860339858", 00:14:54.663 "is_configured": true, 00:14:54.663 "data_offset": 2048, 00:14:54.663 "data_size": 63488 00:14:54.663 } 00:14:54.663 ] 00:14:54.663 }' 00:14:54.663 11:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.663 11:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.921 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:54.921 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.921 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.921 [2024-11-27 11:53:21.258867] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:54.921 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.921 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:54.921 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.921 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.921 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.921 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.921 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.921 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.921 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:14:54.921 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.921 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.921 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.921 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.921 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.922 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.922 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.180 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.180 "name": "Existed_Raid", 00:14:55.180 "uuid": "de62973b-0e7c-42a4-b02c-e542aed51869", 00:14:55.180 "strip_size_kb": 64, 00:14:55.180 "state": "configuring", 00:14:55.180 "raid_level": "raid5f", 00:14:55.180 "superblock": true, 00:14:55.180 "num_base_bdevs": 3, 00:14:55.180 "num_base_bdevs_discovered": 1, 00:14:55.180 "num_base_bdevs_operational": 3, 00:14:55.180 "base_bdevs_list": [ 00:14:55.180 { 00:14:55.180 "name": "BaseBdev1", 00:14:55.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.180 "is_configured": false, 00:14:55.180 "data_offset": 0, 00:14:55.180 "data_size": 0 00:14:55.180 }, 00:14:55.180 { 00:14:55.180 "name": null, 00:14:55.180 "uuid": "16ef3ebc-e84b-4375-9b3f-4d87cdf62df4", 00:14:55.180 "is_configured": false, 00:14:55.180 "data_offset": 0, 00:14:55.180 "data_size": 63488 00:14:55.180 }, 00:14:55.180 { 00:14:55.180 "name": "BaseBdev3", 00:14:55.180 "uuid": "3bfbd5c9-4f04-42a0-a40c-d89860339858", 00:14:55.180 "is_configured": true, 00:14:55.180 "data_offset": 2048, 00:14:55.180 "data_size": 
63488 00:14:55.180 } 00:14:55.180 ] 00:14:55.180 }' 00:14:55.180 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.180 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.439 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:55.439 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.439 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.439 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.439 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.439 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:55.439 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:55.439 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.439 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.439 [2024-11-27 11:53:21.805051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.439 BaseBdev1 00:14:55.439 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.439 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:55.439 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:55.439 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:55.439 11:53:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:55.439 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:55.439 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:55.439 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:55.439 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.439 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.439 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.439 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:55.439 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.439 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.698 [ 00:14:55.698 { 00:14:55.698 "name": "BaseBdev1", 00:14:55.698 "aliases": [ 00:14:55.698 "ec11947f-062f-409d-b797-3d4b160f8966" 00:14:55.698 ], 00:14:55.698 "product_name": "Malloc disk", 00:14:55.698 "block_size": 512, 00:14:55.698 "num_blocks": 65536, 00:14:55.698 "uuid": "ec11947f-062f-409d-b797-3d4b160f8966", 00:14:55.698 "assigned_rate_limits": { 00:14:55.698 "rw_ios_per_sec": 0, 00:14:55.698 "rw_mbytes_per_sec": 0, 00:14:55.698 "r_mbytes_per_sec": 0, 00:14:55.698 "w_mbytes_per_sec": 0 00:14:55.698 }, 00:14:55.698 "claimed": true, 00:14:55.698 "claim_type": "exclusive_write", 00:14:55.698 "zoned": false, 00:14:55.698 "supported_io_types": { 00:14:55.698 "read": true, 00:14:55.698 "write": true, 00:14:55.698 "unmap": true, 00:14:55.698 "flush": true, 00:14:55.698 "reset": true, 00:14:55.698 "nvme_admin": false, 00:14:55.698 
"nvme_io": false, 00:14:55.698 "nvme_io_md": false, 00:14:55.698 "write_zeroes": true, 00:14:55.698 "zcopy": true, 00:14:55.698 "get_zone_info": false, 00:14:55.698 "zone_management": false, 00:14:55.698 "zone_append": false, 00:14:55.698 "compare": false, 00:14:55.698 "compare_and_write": false, 00:14:55.698 "abort": true, 00:14:55.698 "seek_hole": false, 00:14:55.698 "seek_data": false, 00:14:55.698 "copy": true, 00:14:55.698 "nvme_iov_md": false 00:14:55.698 }, 00:14:55.698 "memory_domains": [ 00:14:55.698 { 00:14:55.698 "dma_device_id": "system", 00:14:55.698 "dma_device_type": 1 00:14:55.698 }, 00:14:55.698 { 00:14:55.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.698 "dma_device_type": 2 00:14:55.698 } 00:14:55.698 ], 00:14:55.698 "driver_specific": {} 00:14:55.698 } 00:14:55.698 ] 00:14:55.698 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.698 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:55.698 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:55.698 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.698 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.698 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.698 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.698 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.698 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.698 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:55.698 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.698 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.698 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.698 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.698 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.698 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.698 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.698 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.698 "name": "Existed_Raid", 00:14:55.698 "uuid": "de62973b-0e7c-42a4-b02c-e542aed51869", 00:14:55.698 "strip_size_kb": 64, 00:14:55.698 "state": "configuring", 00:14:55.698 "raid_level": "raid5f", 00:14:55.698 "superblock": true, 00:14:55.698 "num_base_bdevs": 3, 00:14:55.698 "num_base_bdevs_discovered": 2, 00:14:55.698 "num_base_bdevs_operational": 3, 00:14:55.698 "base_bdevs_list": [ 00:14:55.698 { 00:14:55.698 "name": "BaseBdev1", 00:14:55.698 "uuid": "ec11947f-062f-409d-b797-3d4b160f8966", 00:14:55.698 "is_configured": true, 00:14:55.698 "data_offset": 2048, 00:14:55.698 "data_size": 63488 00:14:55.698 }, 00:14:55.698 { 00:14:55.698 "name": null, 00:14:55.698 "uuid": "16ef3ebc-e84b-4375-9b3f-4d87cdf62df4", 00:14:55.698 "is_configured": false, 00:14:55.698 "data_offset": 0, 00:14:55.698 "data_size": 63488 00:14:55.698 }, 00:14:55.698 { 00:14:55.698 "name": "BaseBdev3", 00:14:55.698 "uuid": "3bfbd5c9-4f04-42a0-a40c-d89860339858", 00:14:55.698 "is_configured": true, 00:14:55.698 "data_offset": 2048, 00:14:55.698 "data_size": 
63488 00:14:55.698 } 00:14:55.698 ] 00:14:55.698 }' 00:14:55.698 11:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.698 11:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.956 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:55.956 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.956 11:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.956 11:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.215 11:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.215 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:56.215 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:56.215 11:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.215 11:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.215 [2024-11-27 11:53:22.380294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:56.215 11:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.215 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:56.215 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.215 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.215 11:53:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.215 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.215 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.215 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.215 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.215 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.215 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.215 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.215 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.215 11:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.215 11:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.215 11:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.215 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.215 "name": "Existed_Raid", 00:14:56.215 "uuid": "de62973b-0e7c-42a4-b02c-e542aed51869", 00:14:56.215 "strip_size_kb": 64, 00:14:56.215 "state": "configuring", 00:14:56.215 "raid_level": "raid5f", 00:14:56.215 "superblock": true, 00:14:56.215 "num_base_bdevs": 3, 00:14:56.215 "num_base_bdevs_discovered": 1, 00:14:56.215 "num_base_bdevs_operational": 3, 00:14:56.215 "base_bdevs_list": [ 00:14:56.215 { 00:14:56.215 "name": "BaseBdev1", 00:14:56.215 "uuid": "ec11947f-062f-409d-b797-3d4b160f8966", 
00:14:56.215 "is_configured": true, 00:14:56.215 "data_offset": 2048, 00:14:56.215 "data_size": 63488 00:14:56.215 }, 00:14:56.215 { 00:14:56.215 "name": null, 00:14:56.215 "uuid": "16ef3ebc-e84b-4375-9b3f-4d87cdf62df4", 00:14:56.215 "is_configured": false, 00:14:56.215 "data_offset": 0, 00:14:56.215 "data_size": 63488 00:14:56.215 }, 00:14:56.215 { 00:14:56.215 "name": null, 00:14:56.215 "uuid": "3bfbd5c9-4f04-42a0-a40c-d89860339858", 00:14:56.215 "is_configured": false, 00:14:56.215 "data_offset": 0, 00:14:56.215 "data_size": 63488 00:14:56.215 } 00:14:56.215 ] 00:14:56.215 }' 00:14:56.215 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.215 11:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.784 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.784 11:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.784 11:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.784 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:56.784 11:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.784 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:56.784 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:56.784 11:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.784 11:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.784 [2024-11-27 11:53:22.911526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:14:56.784 11:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.784 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:56.784 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.784 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.784 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.784 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.784 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.784 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.784 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.784 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.784 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.784 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.784 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.784 11:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.784 11:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.784 11:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.784 11:53:22 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.784 "name": "Existed_Raid", 00:14:56.784 "uuid": "de62973b-0e7c-42a4-b02c-e542aed51869", 00:14:56.784 "strip_size_kb": 64, 00:14:56.784 "state": "configuring", 00:14:56.784 "raid_level": "raid5f", 00:14:56.784 "superblock": true, 00:14:56.784 "num_base_bdevs": 3, 00:14:56.784 "num_base_bdevs_discovered": 2, 00:14:56.784 "num_base_bdevs_operational": 3, 00:14:56.784 "base_bdevs_list": [ 00:14:56.784 { 00:14:56.784 "name": "BaseBdev1", 00:14:56.784 "uuid": "ec11947f-062f-409d-b797-3d4b160f8966", 00:14:56.784 "is_configured": true, 00:14:56.784 "data_offset": 2048, 00:14:56.784 "data_size": 63488 00:14:56.784 }, 00:14:56.784 { 00:14:56.784 "name": null, 00:14:56.784 "uuid": "16ef3ebc-e84b-4375-9b3f-4d87cdf62df4", 00:14:56.784 "is_configured": false, 00:14:56.784 "data_offset": 0, 00:14:56.784 "data_size": 63488 00:14:56.784 }, 00:14:56.784 { 00:14:56.784 "name": "BaseBdev3", 00:14:56.784 "uuid": "3bfbd5c9-4f04-42a0-a40c-d89860339858", 00:14:56.784 "is_configured": true, 00:14:56.784 "data_offset": 2048, 00:14:56.784 "data_size": 63488 00:14:56.784 } 00:14:56.784 ] 00:14:56.784 }' 00:14:56.784 11:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.784 11:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.352 11:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:57.352 11:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.352 11:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.352 11:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.352 11:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.352 11:53:23 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:57.352 11:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:57.352 11:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.352 11:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.352 [2024-11-27 11:53:23.486676] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:57.352 11:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.352 11:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:57.352 11:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.352 11:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.352 11:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.352 11:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.352 11:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.352 11:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.352 11:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.352 11:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.352 11:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.352 11:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:57.352 11:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.352 11:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.352 11:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.352 11:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.352 11:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.352 "name": "Existed_Raid", 00:14:57.352 "uuid": "de62973b-0e7c-42a4-b02c-e542aed51869", 00:14:57.352 "strip_size_kb": 64, 00:14:57.352 "state": "configuring", 00:14:57.352 "raid_level": "raid5f", 00:14:57.352 "superblock": true, 00:14:57.352 "num_base_bdevs": 3, 00:14:57.352 "num_base_bdevs_discovered": 1, 00:14:57.352 "num_base_bdevs_operational": 3, 00:14:57.352 "base_bdevs_list": [ 00:14:57.352 { 00:14:57.352 "name": null, 00:14:57.352 "uuid": "ec11947f-062f-409d-b797-3d4b160f8966", 00:14:57.352 "is_configured": false, 00:14:57.352 "data_offset": 0, 00:14:57.352 "data_size": 63488 00:14:57.352 }, 00:14:57.352 { 00:14:57.352 "name": null, 00:14:57.352 "uuid": "16ef3ebc-e84b-4375-9b3f-4d87cdf62df4", 00:14:57.352 "is_configured": false, 00:14:57.352 "data_offset": 0, 00:14:57.352 "data_size": 63488 00:14:57.352 }, 00:14:57.352 { 00:14:57.352 "name": "BaseBdev3", 00:14:57.352 "uuid": "3bfbd5c9-4f04-42a0-a40c-d89860339858", 00:14:57.352 "is_configured": true, 00:14:57.352 "data_offset": 2048, 00:14:57.352 "data_size": 63488 00:14:57.352 } 00:14:57.352 ] 00:14:57.352 }' 00:14:57.352 11:53:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.352 11:53:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.918 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:14:57.918 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.919 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.919 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.919 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.919 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:57.919 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:57.919 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.919 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.919 [2024-11-27 11:53:24.072680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:57.919 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.919 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:57.919 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.919 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.919 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.919 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.919 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.919 
11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.919 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.919 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.919 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.919 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.919 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.919 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.919 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.919 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.919 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.919 "name": "Existed_Raid", 00:14:57.919 "uuid": "de62973b-0e7c-42a4-b02c-e542aed51869", 00:14:57.919 "strip_size_kb": 64, 00:14:57.919 "state": "configuring", 00:14:57.919 "raid_level": "raid5f", 00:14:57.919 "superblock": true, 00:14:57.919 "num_base_bdevs": 3, 00:14:57.919 "num_base_bdevs_discovered": 2, 00:14:57.919 "num_base_bdevs_operational": 3, 00:14:57.919 "base_bdevs_list": [ 00:14:57.919 { 00:14:57.919 "name": null, 00:14:57.919 "uuid": "ec11947f-062f-409d-b797-3d4b160f8966", 00:14:57.919 "is_configured": false, 00:14:57.919 "data_offset": 0, 00:14:57.919 "data_size": 63488 00:14:57.919 }, 00:14:57.919 { 00:14:57.919 "name": "BaseBdev2", 00:14:57.919 "uuid": "16ef3ebc-e84b-4375-9b3f-4d87cdf62df4", 00:14:57.919 "is_configured": true, 00:14:57.919 "data_offset": 2048, 00:14:57.919 "data_size": 63488 00:14:57.919 }, 
00:14:57.919 { 00:14:57.919 "name": "BaseBdev3", 00:14:57.919 "uuid": "3bfbd5c9-4f04-42a0-a40c-d89860339858", 00:14:57.919 "is_configured": true, 00:14:57.919 "data_offset": 2048, 00:14:57.919 "data_size": 63488 00:14:57.919 } 00:14:57.919 ] 00:14:57.919 }' 00:14:57.919 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.919 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.177 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:58.177 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.177 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.177 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.436 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.436 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:58.436 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.436 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:58.436 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.436 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.436 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.436 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ec11947f-062f-409d-b797-3d4b160f8966 00:14:58.436 11:53:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.436 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.436 [2024-11-27 11:53:24.699996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:58.436 [2024-11-27 11:53:24.700360] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:58.436 [2024-11-27 11:53:24.700421] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:58.436 [2024-11-27 11:53:24.700721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:58.436 NewBaseBdev 00:14:58.436 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.436 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:58.436 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:58.436 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:58.436 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:58.436 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:58.436 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:58.436 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:58.436 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.436 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.436 [2024-11-27 11:53:24.707802] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev 
generic 0x617000008200 00:14:58.436 [2024-11-27 11:53:24.707827] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:58.436 [2024-11-27 11:53:24.708148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.436 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.436 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:58.436 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.436 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.436 [ 00:14:58.436 { 00:14:58.436 "name": "NewBaseBdev", 00:14:58.436 "aliases": [ 00:14:58.436 "ec11947f-062f-409d-b797-3d4b160f8966" 00:14:58.436 ], 00:14:58.436 "product_name": "Malloc disk", 00:14:58.436 "block_size": 512, 00:14:58.436 "num_blocks": 65536, 00:14:58.436 "uuid": "ec11947f-062f-409d-b797-3d4b160f8966", 00:14:58.436 "assigned_rate_limits": { 00:14:58.436 "rw_ios_per_sec": 0, 00:14:58.436 "rw_mbytes_per_sec": 0, 00:14:58.436 "r_mbytes_per_sec": 0, 00:14:58.436 "w_mbytes_per_sec": 0 00:14:58.436 }, 00:14:58.436 "claimed": true, 00:14:58.436 "claim_type": "exclusive_write", 00:14:58.436 "zoned": false, 00:14:58.436 "supported_io_types": { 00:14:58.436 "read": true, 00:14:58.436 "write": true, 00:14:58.436 "unmap": true, 00:14:58.436 "flush": true, 00:14:58.436 "reset": true, 00:14:58.436 "nvme_admin": false, 00:14:58.436 "nvme_io": false, 00:14:58.436 "nvme_io_md": false, 00:14:58.436 "write_zeroes": true, 00:14:58.437 "zcopy": true, 00:14:58.437 "get_zone_info": false, 00:14:58.437 "zone_management": false, 00:14:58.437 "zone_append": false, 00:14:58.437 "compare": false, 00:14:58.437 "compare_and_write": false, 00:14:58.437 "abort": true, 00:14:58.437 "seek_hole": false, 
00:14:58.437 "seek_data": false, 00:14:58.437 "copy": true, 00:14:58.437 "nvme_iov_md": false 00:14:58.437 }, 00:14:58.437 "memory_domains": [ 00:14:58.437 { 00:14:58.437 "dma_device_id": "system", 00:14:58.437 "dma_device_type": 1 00:14:58.437 }, 00:14:58.437 { 00:14:58.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.437 "dma_device_type": 2 00:14:58.437 } 00:14:58.437 ], 00:14:58.437 "driver_specific": {} 00:14:58.437 } 00:14:58.437 ] 00:14:58.437 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.437 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:58.437 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:58.437 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.437 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.437 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.437 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.437 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.437 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.437 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.437 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.437 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.437 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:58.437 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.437 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.437 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.437 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.437 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.437 "name": "Existed_Raid", 00:14:58.437 "uuid": "de62973b-0e7c-42a4-b02c-e542aed51869", 00:14:58.437 "strip_size_kb": 64, 00:14:58.437 "state": "online", 00:14:58.437 "raid_level": "raid5f", 00:14:58.437 "superblock": true, 00:14:58.437 "num_base_bdevs": 3, 00:14:58.437 "num_base_bdevs_discovered": 3, 00:14:58.437 "num_base_bdevs_operational": 3, 00:14:58.437 "base_bdevs_list": [ 00:14:58.437 { 00:14:58.437 "name": "NewBaseBdev", 00:14:58.437 "uuid": "ec11947f-062f-409d-b797-3d4b160f8966", 00:14:58.437 "is_configured": true, 00:14:58.437 "data_offset": 2048, 00:14:58.437 "data_size": 63488 00:14:58.437 }, 00:14:58.437 { 00:14:58.437 "name": "BaseBdev2", 00:14:58.437 "uuid": "16ef3ebc-e84b-4375-9b3f-4d87cdf62df4", 00:14:58.437 "is_configured": true, 00:14:58.437 "data_offset": 2048, 00:14:58.437 "data_size": 63488 00:14:58.437 }, 00:14:58.437 { 00:14:58.437 "name": "BaseBdev3", 00:14:58.437 "uuid": "3bfbd5c9-4f04-42a0-a40c-d89860339858", 00:14:58.437 "is_configured": true, 00:14:58.437 "data_offset": 2048, 00:14:58.437 "data_size": 63488 00:14:58.437 } 00:14:58.437 ] 00:14:58.437 }' 00:14:58.437 11:53:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.437 11:53:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.005 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:14:59.005 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:59.005 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:59.005 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:59.005 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:59.005 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:59.005 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:59.005 11:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.005 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:59.005 11:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.005 [2024-11-27 11:53:25.263713] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.005 11:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.005 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:59.005 "name": "Existed_Raid", 00:14:59.005 "aliases": [ 00:14:59.005 "de62973b-0e7c-42a4-b02c-e542aed51869" 00:14:59.005 ], 00:14:59.005 "product_name": "Raid Volume", 00:14:59.005 "block_size": 512, 00:14:59.005 "num_blocks": 126976, 00:14:59.005 "uuid": "de62973b-0e7c-42a4-b02c-e542aed51869", 00:14:59.005 "assigned_rate_limits": { 00:14:59.005 "rw_ios_per_sec": 0, 00:14:59.005 "rw_mbytes_per_sec": 0, 00:14:59.005 "r_mbytes_per_sec": 0, 00:14:59.005 "w_mbytes_per_sec": 0 00:14:59.005 }, 00:14:59.005 "claimed": false, 00:14:59.005 "zoned": false, 00:14:59.005 
"supported_io_types": { 00:14:59.005 "read": true, 00:14:59.005 "write": true, 00:14:59.005 "unmap": false, 00:14:59.005 "flush": false, 00:14:59.005 "reset": true, 00:14:59.005 "nvme_admin": false, 00:14:59.005 "nvme_io": false, 00:14:59.005 "nvme_io_md": false, 00:14:59.005 "write_zeroes": true, 00:14:59.005 "zcopy": false, 00:14:59.005 "get_zone_info": false, 00:14:59.005 "zone_management": false, 00:14:59.005 "zone_append": false, 00:14:59.005 "compare": false, 00:14:59.005 "compare_and_write": false, 00:14:59.005 "abort": false, 00:14:59.005 "seek_hole": false, 00:14:59.005 "seek_data": false, 00:14:59.005 "copy": false, 00:14:59.005 "nvme_iov_md": false 00:14:59.005 }, 00:14:59.005 "driver_specific": { 00:14:59.005 "raid": { 00:14:59.005 "uuid": "de62973b-0e7c-42a4-b02c-e542aed51869", 00:14:59.005 "strip_size_kb": 64, 00:14:59.005 "state": "online", 00:14:59.005 "raid_level": "raid5f", 00:14:59.005 "superblock": true, 00:14:59.005 "num_base_bdevs": 3, 00:14:59.005 "num_base_bdevs_discovered": 3, 00:14:59.005 "num_base_bdevs_operational": 3, 00:14:59.005 "base_bdevs_list": [ 00:14:59.005 { 00:14:59.005 "name": "NewBaseBdev", 00:14:59.005 "uuid": "ec11947f-062f-409d-b797-3d4b160f8966", 00:14:59.005 "is_configured": true, 00:14:59.005 "data_offset": 2048, 00:14:59.005 "data_size": 63488 00:14:59.005 }, 00:14:59.005 { 00:14:59.005 "name": "BaseBdev2", 00:14:59.005 "uuid": "16ef3ebc-e84b-4375-9b3f-4d87cdf62df4", 00:14:59.005 "is_configured": true, 00:14:59.005 "data_offset": 2048, 00:14:59.005 "data_size": 63488 00:14:59.005 }, 00:14:59.005 { 00:14:59.005 "name": "BaseBdev3", 00:14:59.005 "uuid": "3bfbd5c9-4f04-42a0-a40c-d89860339858", 00:14:59.005 "is_configured": true, 00:14:59.005 "data_offset": 2048, 00:14:59.005 "data_size": 63488 00:14:59.005 } 00:14:59.005 ] 00:14:59.005 } 00:14:59.005 } 00:14:59.005 }' 00:14:59.005 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:14:59.005 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:59.005 BaseBdev2 00:14:59.005 BaseBdev3' 00:14:59.005 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.265 [2024-11-27 11:53:25.566963] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:59.265 [2024-11-27 11:53:25.567047] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:14:59.265 [2024-11-27 11:53:25.567161] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.265 [2024-11-27 11:53:25.567505] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:59.265 [2024-11-27 11:53:25.567582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80535 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80535 ']' 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80535 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80535 00:14:59.265 killing process with pid 80535 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80535' 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80535 00:14:59.265 [2024-11-27 11:53:25.616699] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:59.265 11:53:25 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@978 -- # wait 80535 00:14:59.833 [2024-11-27 11:53:25.986089] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:01.212 11:53:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:01.212 00:15:01.212 real 0m11.559s 00:15:01.212 user 0m18.236s 00:15:01.212 sys 0m2.080s 00:15:01.212 11:53:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.212 11:53:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.212 ************************************ 00:15:01.212 END TEST raid5f_state_function_test_sb 00:15:01.212 ************************************ 00:15:01.212 11:53:27 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:15:01.212 11:53:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:01.212 11:53:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.212 11:53:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:01.212 ************************************ 00:15:01.212 START TEST raid5f_superblock_test 00:15:01.212 ************************************ 00:15:01.212 11:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:15:01.212 11:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:01.212 11:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:01.212 11:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:01.212 11:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:01.212 11:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:01.212 11:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:15:01.212 11:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:01.212 11:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:01.212 11:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:01.212 11:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:01.212 11:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:01.212 11:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:01.212 11:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:01.212 11:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:01.212 11:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:01.212 11:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:01.212 11:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81171 00:15:01.212 11:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:01.212 11:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81171 00:15:01.212 11:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81171 ']' 00:15:01.212 11:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.212 11:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:01.212 11:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:01.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.212 11:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:01.212 11:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.212 [2024-11-27 11:53:27.446563] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:15:01.212 [2024-11-27 11:53:27.446772] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81171 ] 00:15:01.470 [2024-11-27 11:53:27.614163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.471 [2024-11-27 11:53:27.750905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.730 [2024-11-27 11:53:27.992403] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.730 [2024-11-27 11:53:27.992482] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:02.298 11:53:28 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.298 malloc1 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.298 [2024-11-27 11:53:28.453128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:02.298 [2024-11-27 11:53:28.453316] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.298 [2024-11-27 11:53:28.453352] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:02.298 [2024-11-27 11:53:28.453367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.298 [2024-11-27 11:53:28.456007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.298 [2024-11-27 11:53:28.456055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:02.298 pt1 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.298 malloc2 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.298 [2024-11-27 11:53:28.514580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:02.298 [2024-11-27 11:53:28.514659] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.298 [2024-11-27 11:53:28.514689] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:02.298 [2024-11-27 11:53:28.514702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.298 [2024-11-27 11:53:28.517201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.298 [2024-11-27 11:53:28.517297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:02.298 pt2 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.298 malloc3 00:15:02.298 11:53:28 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.299 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:02.299 11:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.299 11:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.299 [2024-11-27 11:53:28.592038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:02.299 [2024-11-27 11:53:28.592108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.299 [2024-11-27 11:53:28.592134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:02.299 [2024-11-27 11:53:28.592145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.299 [2024-11-27 11:53:28.594499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.299 [2024-11-27 11:53:28.594543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:02.299 pt3 00:15:02.299 11:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.299 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:02.299 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:02.299 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:02.299 11:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.299 11:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.299 [2024-11-27 11:53:28.604080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:02.299 [2024-11-27 
11:53:28.606192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:02.299 [2024-11-27 11:53:28.606315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:02.299 [2024-11-27 11:53:28.606522] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:02.299 [2024-11-27 11:53:28.606585] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:02.299 [2024-11-27 11:53:28.606886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:02.299 [2024-11-27 11:53:28.613584] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:02.299 [2024-11-27 11:53:28.613646] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:02.299 [2024-11-27 11:53:28.613929] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.299 11:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.299 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:02.299 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.299 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.299 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.299 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.299 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.299 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.299 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:02.299 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.299 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.299 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.299 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.299 11:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.299 11:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.299 11:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.299 11:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.299 "name": "raid_bdev1", 00:15:02.299 "uuid": "af5c442f-cd11-4a58-b09f-c69150bdc3b4", 00:15:02.299 "strip_size_kb": 64, 00:15:02.299 "state": "online", 00:15:02.299 "raid_level": "raid5f", 00:15:02.299 "superblock": true, 00:15:02.299 "num_base_bdevs": 3, 00:15:02.299 "num_base_bdevs_discovered": 3, 00:15:02.299 "num_base_bdevs_operational": 3, 00:15:02.299 "base_bdevs_list": [ 00:15:02.299 { 00:15:02.299 "name": "pt1", 00:15:02.299 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:02.299 "is_configured": true, 00:15:02.299 "data_offset": 2048, 00:15:02.299 "data_size": 63488 00:15:02.299 }, 00:15:02.299 { 00:15:02.299 "name": "pt2", 00:15:02.299 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:02.299 "is_configured": true, 00:15:02.299 "data_offset": 2048, 00:15:02.299 "data_size": 63488 00:15:02.299 }, 00:15:02.299 { 00:15:02.299 "name": "pt3", 00:15:02.299 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:02.299 "is_configured": true, 00:15:02.299 "data_offset": 2048, 00:15:02.299 "data_size": 63488 00:15:02.299 } 00:15:02.299 ] 00:15:02.299 }' 00:15:02.299 11:53:28 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.299 11:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.866 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:02.867 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:02.867 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:02.867 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:02.867 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:02.867 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:02.867 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:02.867 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:02.867 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.867 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.867 [2024-11-27 11:53:29.113179] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.867 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.867 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:02.867 "name": "raid_bdev1", 00:15:02.867 "aliases": [ 00:15:02.867 "af5c442f-cd11-4a58-b09f-c69150bdc3b4" 00:15:02.867 ], 00:15:02.867 "product_name": "Raid Volume", 00:15:02.867 "block_size": 512, 00:15:02.867 "num_blocks": 126976, 00:15:02.867 "uuid": "af5c442f-cd11-4a58-b09f-c69150bdc3b4", 00:15:02.867 "assigned_rate_limits": { 00:15:02.867 "rw_ios_per_sec": 0, 00:15:02.867 
"rw_mbytes_per_sec": 0, 00:15:02.867 "r_mbytes_per_sec": 0, 00:15:02.867 "w_mbytes_per_sec": 0 00:15:02.867 }, 00:15:02.867 "claimed": false, 00:15:02.867 "zoned": false, 00:15:02.867 "supported_io_types": { 00:15:02.867 "read": true, 00:15:02.867 "write": true, 00:15:02.867 "unmap": false, 00:15:02.867 "flush": false, 00:15:02.867 "reset": true, 00:15:02.867 "nvme_admin": false, 00:15:02.867 "nvme_io": false, 00:15:02.867 "nvme_io_md": false, 00:15:02.867 "write_zeroes": true, 00:15:02.867 "zcopy": false, 00:15:02.867 "get_zone_info": false, 00:15:02.867 "zone_management": false, 00:15:02.867 "zone_append": false, 00:15:02.867 "compare": false, 00:15:02.867 "compare_and_write": false, 00:15:02.867 "abort": false, 00:15:02.867 "seek_hole": false, 00:15:02.867 "seek_data": false, 00:15:02.867 "copy": false, 00:15:02.867 "nvme_iov_md": false 00:15:02.867 }, 00:15:02.867 "driver_specific": { 00:15:02.867 "raid": { 00:15:02.867 "uuid": "af5c442f-cd11-4a58-b09f-c69150bdc3b4", 00:15:02.867 "strip_size_kb": 64, 00:15:02.867 "state": "online", 00:15:02.867 "raid_level": "raid5f", 00:15:02.867 "superblock": true, 00:15:02.867 "num_base_bdevs": 3, 00:15:02.867 "num_base_bdevs_discovered": 3, 00:15:02.867 "num_base_bdevs_operational": 3, 00:15:02.867 "base_bdevs_list": [ 00:15:02.867 { 00:15:02.867 "name": "pt1", 00:15:02.867 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:02.867 "is_configured": true, 00:15:02.867 "data_offset": 2048, 00:15:02.867 "data_size": 63488 00:15:02.867 }, 00:15:02.867 { 00:15:02.867 "name": "pt2", 00:15:02.867 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:02.867 "is_configured": true, 00:15:02.867 "data_offset": 2048, 00:15:02.867 "data_size": 63488 00:15:02.867 }, 00:15:02.867 { 00:15:02.867 "name": "pt3", 00:15:02.867 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:02.867 "is_configured": true, 00:15:02.867 "data_offset": 2048, 00:15:02.867 "data_size": 63488 00:15:02.867 } 00:15:02.867 ] 00:15:02.867 } 00:15:02.867 } 
00:15:02.867 }' 00:15:02.867 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:02.867 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:02.867 pt2 00:15:02.867 pt3' 00:15:02.867 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.867 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:02.867 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.867 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:02.867 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.867 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.867 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.126 [2024-11-27 11:53:29.356709] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=af5c442f-cd11-4a58-b09f-c69150bdc3b4 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z af5c442f-cd11-4a58-b09f-c69150bdc3b4 ']' 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.126 [2024-11-27 11:53:29.400406] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:03.126 [2024-11-27 11:53:29.400508] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:03.126 [2024-11-27 11:53:29.400635] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.126 [2024-11-27 11:53:29.400754] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:03.126 [2024-11-27 11:53:29.400810] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.126 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.127 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:03.127 11:53:29 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:03.127 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.127 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.386 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.386 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.387 [2024-11-27 11:53:29.524255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:03.387 [2024-11-27 11:53:29.526520] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:03.387 [2024-11-27 11:53:29.526643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:03.387 [2024-11-27 11:53:29.526717] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:03.387 [2024-11-27 11:53:29.526849] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:03.387 [2024-11-27 11:53:29.526933] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:03.387 [2024-11-27 11:53:29.527015] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:03.387 [2024-11-27 11:53:29.527052] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:03.387 request: 00:15:03.387 { 00:15:03.387 "name": "raid_bdev1", 00:15:03.387 "raid_level": "raid5f", 00:15:03.387 "base_bdevs": [ 00:15:03.387 "malloc1", 00:15:03.387 "malloc2", 00:15:03.387 "malloc3" 00:15:03.387 ], 00:15:03.387 "strip_size_kb": 64, 00:15:03.387 "superblock": false, 00:15:03.387 "method": "bdev_raid_create", 00:15:03.387 "req_id": 1 00:15:03.387 } 00:15:03.387 Got JSON-RPC error response 00:15:03.387 response: 00:15:03.387 { 00:15:03.387 "code": -17, 00:15:03.387 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:03.387 } 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:03.387 11:53:29 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.387 [2024-11-27 11:53:29.608068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:03.387 [2024-11-27 11:53:29.608140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.387 [2024-11-27 11:53:29.608166] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:03.387 [2024-11-27 11:53:29.608177] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.387 [2024-11-27 11:53:29.610759] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.387 [2024-11-27 11:53:29.610802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:03.387 [2024-11-27 11:53:29.610916] bdev_raid.c:3907:raid_bdev_examine_cont: 
*DEBUG*: raid superblock found on bdev pt1 00:15:03.387 [2024-11-27 11:53:29.610981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:03.387 pt1 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.387 
11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.387 "name": "raid_bdev1", 00:15:03.387 "uuid": "af5c442f-cd11-4a58-b09f-c69150bdc3b4", 00:15:03.387 "strip_size_kb": 64, 00:15:03.387 "state": "configuring", 00:15:03.387 "raid_level": "raid5f", 00:15:03.387 "superblock": true, 00:15:03.387 "num_base_bdevs": 3, 00:15:03.387 "num_base_bdevs_discovered": 1, 00:15:03.387 "num_base_bdevs_operational": 3, 00:15:03.387 "base_bdevs_list": [ 00:15:03.387 { 00:15:03.387 "name": "pt1", 00:15:03.387 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:03.387 "is_configured": true, 00:15:03.387 "data_offset": 2048, 00:15:03.387 "data_size": 63488 00:15:03.387 }, 00:15:03.387 { 00:15:03.387 "name": null, 00:15:03.387 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:03.387 "is_configured": false, 00:15:03.387 "data_offset": 2048, 00:15:03.387 "data_size": 63488 00:15:03.387 }, 00:15:03.387 { 00:15:03.387 "name": null, 00:15:03.387 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:03.387 "is_configured": false, 00:15:03.387 "data_offset": 2048, 00:15:03.387 "data_size": 63488 00:15:03.387 } 00:15:03.387 ] 00:15:03.387 }' 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.387 11:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.956 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:03.956 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:03.956 11:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.956 11:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.956 [2024-11-27 11:53:30.087332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:03.956 
[2024-11-27 11:53:30.087420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.956 [2024-11-27 11:53:30.087445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:03.956 [2024-11-27 11:53:30.087457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.956 [2024-11-27 11:53:30.087997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.956 [2024-11-27 11:53:30.088029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:03.956 [2024-11-27 11:53:30.088129] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:03.956 [2024-11-27 11:53:30.088160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:03.956 pt2 00:15:03.956 11:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.956 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:03.956 11:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.956 11:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.956 [2024-11-27 11:53:30.099326] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:03.956 11:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.956 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:03.956 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.956 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.956 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.956 
11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.956 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.956 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.956 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.956 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.956 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.956 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.956 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.956 11:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.956 11:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.956 11:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.956 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.956 "name": "raid_bdev1", 00:15:03.956 "uuid": "af5c442f-cd11-4a58-b09f-c69150bdc3b4", 00:15:03.956 "strip_size_kb": 64, 00:15:03.956 "state": "configuring", 00:15:03.956 "raid_level": "raid5f", 00:15:03.956 "superblock": true, 00:15:03.956 "num_base_bdevs": 3, 00:15:03.956 "num_base_bdevs_discovered": 1, 00:15:03.956 "num_base_bdevs_operational": 3, 00:15:03.956 "base_bdevs_list": [ 00:15:03.956 { 00:15:03.956 "name": "pt1", 00:15:03.956 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:03.956 "is_configured": true, 00:15:03.956 "data_offset": 2048, 00:15:03.956 "data_size": 63488 00:15:03.956 }, 00:15:03.956 { 00:15:03.956 "name": null, 00:15:03.956 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:15:03.956 "is_configured": false, 00:15:03.956 "data_offset": 0, 00:15:03.956 "data_size": 63488 00:15:03.956 }, 00:15:03.956 { 00:15:03.956 "name": null, 00:15:03.956 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:03.956 "is_configured": false, 00:15:03.956 "data_offset": 2048, 00:15:03.956 "data_size": 63488 00:15:03.956 } 00:15:03.956 ] 00:15:03.956 }' 00:15:03.956 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.956 11:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.215 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:04.215 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:04.215 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:04.216 11:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.216 11:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.216 [2024-11-27 11:53:30.562561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:04.216 [2024-11-27 11:53:30.562654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.216 [2024-11-27 11:53:30.562676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:04.216 [2024-11-27 11:53:30.562690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.216 [2024-11-27 11:53:30.563243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.216 [2024-11-27 11:53:30.563270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:04.216 [2024-11-27 11:53:30.563365] bdev_raid.c:3907:raid_bdev_examine_cont: 
*DEBUG*: raid superblock found on bdev pt2 00:15:04.216 [2024-11-27 11:53:30.563392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:04.216 pt2 00:15:04.216 11:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.216 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:04.216 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:04.216 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:04.216 11:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.216 11:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.216 [2024-11-27 11:53:30.574526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:04.216 [2024-11-27 11:53:30.574586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.216 [2024-11-27 11:53:30.574603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:04.216 [2024-11-27 11:53:30.574616] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.216 [2024-11-27 11:53:30.575102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.216 [2024-11-27 11:53:30.575131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:04.216 [2024-11-27 11:53:30.575211] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:04.216 [2024-11-27 11:53:30.575235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:04.216 [2024-11-27 11:53:30.575403] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:04.216 [2024-11-27 
11:53:30.575428] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:04.216 [2024-11-27 11:53:30.575720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:04.216 [2024-11-27 11:53:30.582563] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:04.216 pt3 00:15:04.216 [2024-11-27 11:53:30.582649] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:04.216 [2024-11-27 11:53:30.582876] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.216 11:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.216 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:04.216 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:04.216 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:04.216 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.216 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.216 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.216 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.216 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.216 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.216 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.216 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.216 
11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.216 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.216 11:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.216 11:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.216 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.477 11:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.477 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.477 "name": "raid_bdev1", 00:15:04.477 "uuid": "af5c442f-cd11-4a58-b09f-c69150bdc3b4", 00:15:04.477 "strip_size_kb": 64, 00:15:04.477 "state": "online", 00:15:04.477 "raid_level": "raid5f", 00:15:04.477 "superblock": true, 00:15:04.477 "num_base_bdevs": 3, 00:15:04.477 "num_base_bdevs_discovered": 3, 00:15:04.477 "num_base_bdevs_operational": 3, 00:15:04.477 "base_bdevs_list": [ 00:15:04.477 { 00:15:04.477 "name": "pt1", 00:15:04.477 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:04.477 "is_configured": true, 00:15:04.477 "data_offset": 2048, 00:15:04.477 "data_size": 63488 00:15:04.477 }, 00:15:04.477 { 00:15:04.477 "name": "pt2", 00:15:04.477 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:04.477 "is_configured": true, 00:15:04.477 "data_offset": 2048, 00:15:04.477 "data_size": 63488 00:15:04.477 }, 00:15:04.477 { 00:15:04.477 "name": "pt3", 00:15:04.477 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:04.477 "is_configured": true, 00:15:04.477 "data_offset": 2048, 00:15:04.477 "data_size": 63488 00:15:04.477 } 00:15:04.477 ] 00:15:04.477 }' 00:15:04.477 11:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.477 11:53:30 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:04.736 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:04.736 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:04.736 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:04.736 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:04.736 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:04.736 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:04.736 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:04.736 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:04.736 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.736 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.736 [2024-11-27 11:53:31.065966] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.736 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.736 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:04.736 "name": "raid_bdev1", 00:15:04.736 "aliases": [ 00:15:04.736 "af5c442f-cd11-4a58-b09f-c69150bdc3b4" 00:15:04.736 ], 00:15:04.736 "product_name": "Raid Volume", 00:15:04.736 "block_size": 512, 00:15:04.736 "num_blocks": 126976, 00:15:04.736 "uuid": "af5c442f-cd11-4a58-b09f-c69150bdc3b4", 00:15:04.736 "assigned_rate_limits": { 00:15:04.736 "rw_ios_per_sec": 0, 00:15:04.736 "rw_mbytes_per_sec": 0, 00:15:04.736 "r_mbytes_per_sec": 0, 00:15:04.736 "w_mbytes_per_sec": 0 00:15:04.736 }, 00:15:04.736 "claimed": false, 
00:15:04.736 "zoned": false, 00:15:04.736 "supported_io_types": { 00:15:04.736 "read": true, 00:15:04.736 "write": true, 00:15:04.736 "unmap": false, 00:15:04.736 "flush": false, 00:15:04.736 "reset": true, 00:15:04.736 "nvme_admin": false, 00:15:04.736 "nvme_io": false, 00:15:04.736 "nvme_io_md": false, 00:15:04.736 "write_zeroes": true, 00:15:04.736 "zcopy": false, 00:15:04.736 "get_zone_info": false, 00:15:04.736 "zone_management": false, 00:15:04.736 "zone_append": false, 00:15:04.736 "compare": false, 00:15:04.737 "compare_and_write": false, 00:15:04.737 "abort": false, 00:15:04.737 "seek_hole": false, 00:15:04.737 "seek_data": false, 00:15:04.737 "copy": false, 00:15:04.737 "nvme_iov_md": false 00:15:04.737 }, 00:15:04.737 "driver_specific": { 00:15:04.737 "raid": { 00:15:04.737 "uuid": "af5c442f-cd11-4a58-b09f-c69150bdc3b4", 00:15:04.737 "strip_size_kb": 64, 00:15:04.737 "state": "online", 00:15:04.737 "raid_level": "raid5f", 00:15:04.737 "superblock": true, 00:15:04.737 "num_base_bdevs": 3, 00:15:04.737 "num_base_bdevs_discovered": 3, 00:15:04.737 "num_base_bdevs_operational": 3, 00:15:04.737 "base_bdevs_list": [ 00:15:04.737 { 00:15:04.737 "name": "pt1", 00:15:04.737 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:04.737 "is_configured": true, 00:15:04.737 "data_offset": 2048, 00:15:04.737 "data_size": 63488 00:15:04.737 }, 00:15:04.737 { 00:15:04.737 "name": "pt2", 00:15:04.737 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:04.737 "is_configured": true, 00:15:04.737 "data_offset": 2048, 00:15:04.737 "data_size": 63488 00:15:04.737 }, 00:15:04.737 { 00:15:04.737 "name": "pt3", 00:15:04.737 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:04.737 "is_configured": true, 00:15:04.737 "data_offset": 2048, 00:15:04.737 "data_size": 63488 00:15:04.737 } 00:15:04.737 ] 00:15:04.737 } 00:15:04.737 } 00:15:04.737 }' 00:15:04.737 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:04.996 pt2 00:15:04.996 pt3' 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.996 11:53:31 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:04.996 [2024-11-27 11:53:31.345448] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.996 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.256 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
af5c442f-cd11-4a58-b09f-c69150bdc3b4 '!=' af5c442f-cd11-4a58-b09f-c69150bdc3b4 ']' 00:15:05.256 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:05.256 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:05.256 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:05.256 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:05.256 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.256 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.256 [2024-11-27 11:53:31.393224] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:05.256 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.256 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:05.256 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.256 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.256 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.256 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.256 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:05.256 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.256 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.256 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.256 11:53:31 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.256 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.256 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.256 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.256 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.256 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.256 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.256 "name": "raid_bdev1", 00:15:05.256 "uuid": "af5c442f-cd11-4a58-b09f-c69150bdc3b4", 00:15:05.256 "strip_size_kb": 64, 00:15:05.256 "state": "online", 00:15:05.256 "raid_level": "raid5f", 00:15:05.256 "superblock": true, 00:15:05.256 "num_base_bdevs": 3, 00:15:05.256 "num_base_bdevs_discovered": 2, 00:15:05.256 "num_base_bdevs_operational": 2, 00:15:05.256 "base_bdevs_list": [ 00:15:05.256 { 00:15:05.256 "name": null, 00:15:05.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.256 "is_configured": false, 00:15:05.256 "data_offset": 0, 00:15:05.256 "data_size": 63488 00:15:05.256 }, 00:15:05.256 { 00:15:05.256 "name": "pt2", 00:15:05.256 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:05.256 "is_configured": true, 00:15:05.256 "data_offset": 2048, 00:15:05.256 "data_size": 63488 00:15:05.256 }, 00:15:05.256 { 00:15:05.256 "name": "pt3", 00:15:05.256 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:05.256 "is_configured": true, 00:15:05.256 "data_offset": 2048, 00:15:05.256 "data_size": 63488 00:15:05.256 } 00:15:05.256 ] 00:15:05.256 }' 00:15:05.256 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.256 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.517 
11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:05.517 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.517 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.517 [2024-11-27 11:53:31.872351] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:05.517 [2024-11-27 11:53:31.872393] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:05.517 [2024-11-27 11:53:31.872488] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.517 [2024-11-27 11:53:31.872553] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:05.517 [2024-11-27 11:53:31.872571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:05.517 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.517 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.517 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.517 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.517 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:05.517 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.776 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:05.776 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:05.776 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:05.776 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:15:05.776 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:05.776 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.776 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.776 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.776 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:05.776 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:05.776 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:05.776 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.776 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.776 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.776 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:05.776 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:05.776 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:05.776 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:05.776 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:05.776 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.776 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.776 [2024-11-27 11:53:31.944181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:15:05.776 [2024-11-27 11:53:31.944271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.776 [2024-11-27 11:53:31.944290] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:05.776 [2024-11-27 11:53:31.944303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.776 [2024-11-27 11:53:31.946963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.776 [2024-11-27 11:53:31.947049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:05.776 [2024-11-27 11:53:31.947182] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:05.776 [2024-11-27 11:53:31.947269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:05.776 pt2 00:15:05.776 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.777 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:05.777 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.777 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.777 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.777 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.777 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:05.777 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.777 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.777 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:05.777 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.777 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.777 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.777 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.777 11:53:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.777 11:53:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.777 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.777 "name": "raid_bdev1", 00:15:05.777 "uuid": "af5c442f-cd11-4a58-b09f-c69150bdc3b4", 00:15:05.777 "strip_size_kb": 64, 00:15:05.777 "state": "configuring", 00:15:05.777 "raid_level": "raid5f", 00:15:05.777 "superblock": true, 00:15:05.777 "num_base_bdevs": 3, 00:15:05.777 "num_base_bdevs_discovered": 1, 00:15:05.777 "num_base_bdevs_operational": 2, 00:15:05.777 "base_bdevs_list": [ 00:15:05.777 { 00:15:05.777 "name": null, 00:15:05.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.777 "is_configured": false, 00:15:05.777 "data_offset": 2048, 00:15:05.777 "data_size": 63488 00:15:05.777 }, 00:15:05.777 { 00:15:05.777 "name": "pt2", 00:15:05.777 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:05.777 "is_configured": true, 00:15:05.777 "data_offset": 2048, 00:15:05.777 "data_size": 63488 00:15:05.777 }, 00:15:05.777 { 00:15:05.777 "name": null, 00:15:05.777 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:05.777 "is_configured": false, 00:15:05.777 "data_offset": 2048, 00:15:05.777 "data_size": 63488 00:15:05.777 } 00:15:05.777 ] 00:15:05.777 }' 00:15:05.777 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.777 11:53:32 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.346 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:06.346 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:06.346 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:15:06.346 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:06.346 11:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.346 11:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.346 [2024-11-27 11:53:32.447523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:06.346 [2024-11-27 11:53:32.447627] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.346 [2024-11-27 11:53:32.447655] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:06.346 [2024-11-27 11:53:32.447669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.346 [2024-11-27 11:53:32.448234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.346 [2024-11-27 11:53:32.448371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:06.346 [2024-11-27 11:53:32.448484] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:06.346 [2024-11-27 11:53:32.448520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:06.346 [2024-11-27 11:53:32.448694] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:06.346 [2024-11-27 11:53:32.448711] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:06.346 [2024-11-27 
11:53:32.449087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:06.346 [2024-11-27 11:53:32.455795] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:06.346 [2024-11-27 11:53:32.455891] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:06.346 [2024-11-27 11:53:32.456276] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.346 pt3 00:15:06.346 11:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.346 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:06.346 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.346 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.346 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.346 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.346 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:06.346 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.346 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.346 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.346 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.346 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.346 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:06.346 11:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.346 11:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.346 11:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.346 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.346 "name": "raid_bdev1", 00:15:06.346 "uuid": "af5c442f-cd11-4a58-b09f-c69150bdc3b4", 00:15:06.346 "strip_size_kb": 64, 00:15:06.346 "state": "online", 00:15:06.346 "raid_level": "raid5f", 00:15:06.346 "superblock": true, 00:15:06.346 "num_base_bdevs": 3, 00:15:06.346 "num_base_bdevs_discovered": 2, 00:15:06.346 "num_base_bdevs_operational": 2, 00:15:06.346 "base_bdevs_list": [ 00:15:06.346 { 00:15:06.346 "name": null, 00:15:06.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.346 "is_configured": false, 00:15:06.346 "data_offset": 2048, 00:15:06.346 "data_size": 63488 00:15:06.346 }, 00:15:06.346 { 00:15:06.346 "name": "pt2", 00:15:06.346 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.346 "is_configured": true, 00:15:06.346 "data_offset": 2048, 00:15:06.346 "data_size": 63488 00:15:06.346 }, 00:15:06.346 { 00:15:06.346 "name": "pt3", 00:15:06.346 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:06.346 "is_configured": true, 00:15:06.346 "data_offset": 2048, 00:15:06.346 "data_size": 63488 00:15:06.346 } 00:15:06.346 ] 00:15:06.346 }' 00:15:06.346 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.346 11:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:06.606 [2024-11-27 11:53:32.912221] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:06.606 [2024-11-27 11:53:32.912272] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:06.606 [2024-11-27 11:53:32.912370] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:06.606 [2024-11-27 11:53:32.912445] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:06.606 [2024-11-27 11:53:32.912474] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.606 11:53:32 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.606 [2024-11-27 11:53:32.972110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:06.606 [2024-11-27 11:53:32.972263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.606 [2024-11-27 11:53:32.972313] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:06.606 [2024-11-27 11:53:32.972356] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.606 [2024-11-27 11:53:32.975231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.606 [2024-11-27 11:53:32.975330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:06.606 [2024-11-27 11:53:32.975462] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:06.606 [2024-11-27 11:53:32.975544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:06.606 [2024-11-27 11:53:32.975792] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:06.606 [2024-11-27 11:53:32.975897] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:06.606 [2024-11-27 11:53:32.976000] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:06.606 
[2024-11-27 11:53:32.976135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:06.606 pt1 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.606 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.607 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.607 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.607 11:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.607 11:53:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.607 11:53:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.866 11:53:32 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.866 11:53:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.866 "name": "raid_bdev1", 00:15:06.866 "uuid": "af5c442f-cd11-4a58-b09f-c69150bdc3b4", 00:15:06.866 "strip_size_kb": 64, 00:15:06.866 "state": "configuring", 00:15:06.866 "raid_level": "raid5f", 00:15:06.866 "superblock": true, 00:15:06.866 "num_base_bdevs": 3, 00:15:06.866 "num_base_bdevs_discovered": 1, 00:15:06.866 "num_base_bdevs_operational": 2, 00:15:06.866 "base_bdevs_list": [ 00:15:06.866 { 00:15:06.866 "name": null, 00:15:06.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.866 "is_configured": false, 00:15:06.866 "data_offset": 2048, 00:15:06.866 "data_size": 63488 00:15:06.866 }, 00:15:06.866 { 00:15:06.866 "name": "pt2", 00:15:06.866 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.866 "is_configured": true, 00:15:06.866 "data_offset": 2048, 00:15:06.866 "data_size": 63488 00:15:06.866 }, 00:15:06.866 { 00:15:06.866 "name": null, 00:15:06.866 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:06.866 "is_configured": false, 00:15:06.866 "data_offset": 2048, 00:15:06.866 "data_size": 63488 00:15:06.866 } 00:15:06.866 ] 00:15:06.866 }' 00:15:06.866 11:53:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.866 11:53:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.126 11:53:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:07.126 11:53:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:07.126 11:53:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.126 11:53:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.126 11:53:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:15:07.126 11:53:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:07.126 11:53:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:07.126 11:53:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.126 11:53:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.126 [2024-11-27 11:53:33.475645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:07.126 [2024-11-27 11:53:33.475797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.126 [2024-11-27 11:53:33.475857] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:07.126 [2024-11-27 11:53:33.475874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.126 [2024-11-27 11:53:33.476511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.126 [2024-11-27 11:53:33.476545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:07.126 [2024-11-27 11:53:33.476654] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:07.126 [2024-11-27 11:53:33.476694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:07.126 [2024-11-27 11:53:33.476887] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:07.126 [2024-11-27 11:53:33.476900] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:07.126 [2024-11-27 11:53:33.477229] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:07.126 [2024-11-27 11:53:33.484943] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:07.126 [2024-11-27 
11:53:33.484977] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:07.126 [2024-11-27 11:53:33.485301] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.126 pt3 00:15:07.126 11:53:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.126 11:53:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:07.126 11:53:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.126 11:53:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.126 11:53:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:07.126 11:53:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.126 11:53:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:07.126 11:53:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.127 11:53:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.127 11:53:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.127 11:53:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.127 11:53:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.127 11:53:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.127 11:53:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.127 11:53:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.386 11:53:33 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.386 11:53:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.386 "name": "raid_bdev1", 00:15:07.386 "uuid": "af5c442f-cd11-4a58-b09f-c69150bdc3b4", 00:15:07.386 "strip_size_kb": 64, 00:15:07.386 "state": "online", 00:15:07.386 "raid_level": "raid5f", 00:15:07.386 "superblock": true, 00:15:07.386 "num_base_bdevs": 3, 00:15:07.386 "num_base_bdevs_discovered": 2, 00:15:07.386 "num_base_bdevs_operational": 2, 00:15:07.386 "base_bdevs_list": [ 00:15:07.386 { 00:15:07.386 "name": null, 00:15:07.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.386 "is_configured": false, 00:15:07.386 "data_offset": 2048, 00:15:07.386 "data_size": 63488 00:15:07.386 }, 00:15:07.386 { 00:15:07.386 "name": "pt2", 00:15:07.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:07.386 "is_configured": true, 00:15:07.386 "data_offset": 2048, 00:15:07.386 "data_size": 63488 00:15:07.386 }, 00:15:07.386 { 00:15:07.386 "name": "pt3", 00:15:07.386 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:07.386 "is_configured": true, 00:15:07.386 "data_offset": 2048, 00:15:07.386 "data_size": 63488 00:15:07.386 } 00:15:07.386 ] 00:15:07.386 }' 00:15:07.387 11:53:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.387 11:53:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.646 11:53:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:07.646 11:53:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:07.646 11:53:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.646 11:53:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.646 11:53:33 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.646 11:53:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:07.646 11:53:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:07.646 11:53:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:07.646 11:53:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.646 11:53:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.646 [2024-11-27 11:53:34.005683] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:07.646 11:53:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.905 11:53:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' af5c442f-cd11-4a58-b09f-c69150bdc3b4 '!=' af5c442f-cd11-4a58-b09f-c69150bdc3b4 ']' 00:15:07.905 11:53:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81171 00:15:07.905 11:53:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81171 ']' 00:15:07.905 11:53:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81171 00:15:07.906 11:53:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:07.906 11:53:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:07.906 11:53:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81171 00:15:07.906 killing process with pid 81171 00:15:07.906 11:53:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:07.906 11:53:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:07.906 11:53:34 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 81171' 00:15:07.906 11:53:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81171 00:15:07.906 [2024-11-27 11:53:34.080819] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:07.906 [2024-11-27 11:53:34.080950] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.906 11:53:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81171 00:15:07.906 [2024-11-27 11:53:34.081023] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:07.906 [2024-11-27 11:53:34.081038] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:08.165 [2024-11-27 11:53:34.449570] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:09.542 ************************************ 00:15:09.542 END TEST raid5f_superblock_test 00:15:09.542 ************************************ 00:15:09.542 11:53:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:09.542 00:15:09.542 real 0m8.458s 00:15:09.542 user 0m13.103s 00:15:09.542 sys 0m1.466s 00:15:09.542 11:53:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:09.542 11:53:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.542 11:53:35 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:09.542 11:53:35 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:09.542 11:53:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:09.542 11:53:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.542 11:53:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:09.542 ************************************ 00:15:09.542 START TEST 
raid5f_rebuild_test 00:15:09.542 ************************************ 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:09.542 11:53:35 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81616 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81616 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81616 ']' 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:15:09.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:09.542 11:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.801 [2024-11-27 11:53:35.987244] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:15:09.801 [2024-11-27 11:53:35.987517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:09.801 Zero copy mechanism will not be used. 00:15:09.801 -allocations --file-prefix=spdk_pid81616 ] 00:15:09.801 [2024-11-27 11:53:36.168193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.060 [2024-11-27 11:53:36.302671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.323 [2024-11-27 11:53:36.545308] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.323 [2024-11-27 11:53:36.545443] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.603 11:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:10.603 11:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:10.603 11:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:10.603 11:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:10.603 11:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.603 11:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:15:10.603 BaseBdev1_malloc 00:15:10.603 11:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.603 11:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:10.603 11:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.603 11:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.603 [2024-11-27 11:53:36.915517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:10.603 [2024-11-27 11:53:36.915672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.603 [2024-11-27 11:53:36.915723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:10.603 [2024-11-27 11:53:36.915766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.603 [2024-11-27 11:53:36.918273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.603 [2024-11-27 11:53:36.918371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:10.603 BaseBdev1 00:15:10.603 11:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.603 11:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:10.603 11:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:10.603 11:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.603 11:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.603 BaseBdev2_malloc 00:15:10.603 11:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.603 11:53:36 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:10.603 11:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.603 11:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.870 [2024-11-27 11:53:36.977492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:10.870 [2024-11-27 11:53:36.977576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.870 [2024-11-27 11:53:36.977605] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:10.870 [2024-11-27 11:53:36.977619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.870 [2024-11-27 11:53:36.980174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.870 [2024-11-27 11:53:36.980223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:10.870 BaseBdev2 00:15:10.870 11:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.870 11:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:10.870 11:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:10.870 11:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.870 11:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.870 BaseBdev3_malloc 00:15:10.870 11:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.870 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:10.870 11:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.871 
11:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.871 [2024-11-27 11:53:37.048532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:10.871 [2024-11-27 11:53:37.048659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.871 [2024-11-27 11:53:37.048705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:10.871 [2024-11-27 11:53:37.048748] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.871 [2024-11-27 11:53:37.051135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.871 [2024-11-27 11:53:37.051238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:10.871 BaseBdev3 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.871 spare_malloc 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.871 spare_delay 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd 
bdev_passthru_create -b spare_delay -p spare 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.871 [2024-11-27 11:53:37.121052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:10.871 [2024-11-27 11:53:37.121118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.871 [2024-11-27 11:53:37.121151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:10.871 [2024-11-27 11:53:37.121163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.871 [2024-11-27 11:53:37.123500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.871 [2024-11-27 11:53:37.123637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:10.871 spare 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.871 [2024-11-27 11:53:37.133110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.871 [2024-11-27 11:53:37.135248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:10.871 [2024-11-27 11:53:37.135325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:10.871 [2024-11-27 11:53:37.135424] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:10.871 
[2024-11-27 11:53:37.135437] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:10.871 [2024-11-27 11:53:37.135783] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:10.871 [2024-11-27 11:53:37.142744] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:10.871 [2024-11-27 11:53:37.142771] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:10.871 [2024-11-27 11:53:37.143007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.871 11:53:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.871 "name": "raid_bdev1", 00:15:10.871 "uuid": "42fb6624-b5f9-4263-b032-7e70a0aeea9a", 00:15:10.871 "strip_size_kb": 64, 00:15:10.871 "state": "online", 00:15:10.871 "raid_level": "raid5f", 00:15:10.871 "superblock": false, 00:15:10.871 "num_base_bdevs": 3, 00:15:10.871 "num_base_bdevs_discovered": 3, 00:15:10.871 "num_base_bdevs_operational": 3, 00:15:10.871 "base_bdevs_list": [ 00:15:10.871 { 00:15:10.871 "name": "BaseBdev1", 00:15:10.871 "uuid": "2a29d385-53bb-5edb-acad-10855d84e065", 00:15:10.871 "is_configured": true, 00:15:10.871 "data_offset": 0, 00:15:10.871 "data_size": 65536 00:15:10.871 }, 00:15:10.871 { 00:15:10.871 "name": "BaseBdev2", 00:15:10.871 "uuid": "88c7f561-ef8f-5914-9c01-9df15467107e", 00:15:10.871 "is_configured": true, 00:15:10.871 "data_offset": 0, 00:15:10.871 "data_size": 65536 00:15:10.871 }, 00:15:10.871 { 00:15:10.871 "name": "BaseBdev3", 00:15:10.871 "uuid": "e8d31599-2076-59e3-af31-87e06240274f", 00:15:10.871 "is_configured": true, 00:15:10.871 "data_offset": 0, 00:15:10.871 "data_size": 65536 00:15:10.871 } 00:15:10.871 ] 00:15:10.871 }' 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.871 11:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.439 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:11.439 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:15:11.439 11:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.439 11:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.439 [2024-11-27 11:53:37.638170] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:11.439 11:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.439 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:11.439 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.439 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:11.439 11:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.439 11:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.439 11:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.439 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:11.440 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:11.440 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:11.440 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:11.440 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:11.440 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:11.440 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:11.440 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:11.440 11:53:37 bdev_raid.raid5f_rebuild_test 
-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:11.440 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:11.440 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:11.440 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:11.440 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:11.440 11:53:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:11.698 [2024-11-27 11:53:37.965444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:11.698 /dev/nbd0 00:15:11.698 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:11.698 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:11.698 11:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:11.698 11:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:11.698 11:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:11.698 11:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:11.698 11:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:11.698 11:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:11.698 11:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:11.698 11:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:11.698 11:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:11.698 1+0 
records in 00:15:11.698 1+0 records out 00:15:11.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530594 s, 7.7 MB/s 00:15:11.698 11:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.698 11:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:11.699 11:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.699 11:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:11.699 11:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:11.699 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:11.699 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:11.699 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:11.699 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:11.699 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:11.699 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:12.267 512+0 records in 00:15:12.267 512+0 records out 00:15:12.267 67108864 bytes (67 MB, 64 MiB) copied, 0.474773 s, 141 MB/s 00:15:12.267 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:12.267 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:12.267 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:12.267 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:12.267 11:53:38 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@51 -- # local i 00:15:12.267 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:12.267 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:12.526 [2024-11-27 11:53:38.782149] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.526 [2024-11-27 11:53:38.799043] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.526 "name": "raid_bdev1", 00:15:12.526 "uuid": "42fb6624-b5f9-4263-b032-7e70a0aeea9a", 00:15:12.526 "strip_size_kb": 64, 00:15:12.526 "state": "online", 00:15:12.526 "raid_level": "raid5f", 00:15:12.526 "superblock": false, 00:15:12.526 "num_base_bdevs": 3, 00:15:12.526 "num_base_bdevs_discovered": 2, 00:15:12.526 "num_base_bdevs_operational": 2, 00:15:12.526 "base_bdevs_list": [ 00:15:12.526 { 00:15:12.526 "name": null, 00:15:12.526 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:12.526 "is_configured": false, 00:15:12.526 "data_offset": 0, 00:15:12.526 "data_size": 65536 00:15:12.526 }, 00:15:12.526 { 00:15:12.526 "name": "BaseBdev2", 00:15:12.526 "uuid": "88c7f561-ef8f-5914-9c01-9df15467107e", 00:15:12.526 "is_configured": true, 00:15:12.526 "data_offset": 0, 00:15:12.526 "data_size": 65536 00:15:12.526 }, 00:15:12.526 { 00:15:12.526 "name": "BaseBdev3", 00:15:12.526 "uuid": "e8d31599-2076-59e3-af31-87e06240274f", 00:15:12.526 "is_configured": true, 00:15:12.526 "data_offset": 0, 00:15:12.526 "data_size": 65536 00:15:12.526 } 00:15:12.526 ] 00:15:12.526 }' 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.526 11:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.094 11:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:13.094 11:53:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.094 11:53:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.094 [2024-11-27 11:53:39.274273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:13.094 [2024-11-27 11:53:39.296130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:13.094 11:53:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.094 11:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:13.094 [2024-11-27 11:53:39.306335] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:14.032 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.032 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.032 
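The `verify_raid_bdev_state` calls traced above capture the raid bdev's JSON via `rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'` and then compare fields like `state` and `num_base_bdevs_discovered` against expected values. A minimal, dependency-free sketch of that pattern follows; `check_field` and the inlined JSON are hypothetical stand-ins (the real suite uses `rpc.py` over a Unix socket plus `jq`), kept to `grep`/`sed` so the sketch runs anywhere:

```shell
# Hypothetical helper mimicking the verify_raid_bdev_state field checks:
# pull one scalar field out of a captured raid bdev JSON blob.
check_field() {
  local json=$1 key=$2
  # grep -o isolates `"key": value`; sed strips the key, colon, and
  # optional opening quote, leaving just the value. jq does this in the
  # real test suite; grep keeps the sketch dependency-free.
  printf '%s\n' "$json" | grep -o "\"$key\": *\"\?[^,\"]*" | head -n1 \
    | sed 's/.*: *"\?//'
}

# Stand-in for the rpc_cmd bdev_raid_get_bdevs output seen in the log,
# reduced to the fields the state check actually compares.
raid_bdev_info='{ "name": "raid_bdev1", "state": "online", "raid_level": "raid5f", "num_base_bdevs_discovered": 2, "num_base_bdevs_operational": 2 }'

state=$(check_field "$raid_bdev_info" state)
discovered=$(check_field "$raid_bdev_info" num_base_bdevs_discovered)
```

After removing `BaseBdev1`, the log's expected values are `state=online` and `discovered=2`, matching the `verify_raid_bdev_state raid_bdev1 online raid5f 64 2` invocation above.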
11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.032 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.032 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.032 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.032 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.032 11:53:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.032 11:53:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.032 11:53:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.032 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.032 "name": "raid_bdev1", 00:15:14.032 "uuid": "42fb6624-b5f9-4263-b032-7e70a0aeea9a", 00:15:14.032 "strip_size_kb": 64, 00:15:14.032 "state": "online", 00:15:14.032 "raid_level": "raid5f", 00:15:14.032 "superblock": false, 00:15:14.032 "num_base_bdevs": 3, 00:15:14.032 "num_base_bdevs_discovered": 3, 00:15:14.032 "num_base_bdevs_operational": 3, 00:15:14.032 "process": { 00:15:14.032 "type": "rebuild", 00:15:14.032 "target": "spare", 00:15:14.032 "progress": { 00:15:14.032 "blocks": 20480, 00:15:14.032 "percent": 15 00:15:14.032 } 00:15:14.032 }, 00:15:14.032 "base_bdevs_list": [ 00:15:14.032 { 00:15:14.032 "name": "spare", 00:15:14.032 "uuid": "4ed6235e-5b57-5c2c-89e5-90c1c2c87ff1", 00:15:14.032 "is_configured": true, 00:15:14.032 "data_offset": 0, 00:15:14.032 "data_size": 65536 00:15:14.032 }, 00:15:14.032 { 00:15:14.032 "name": "BaseBdev2", 00:15:14.032 "uuid": "88c7f561-ef8f-5914-9c01-9df15467107e", 00:15:14.032 "is_configured": true, 00:15:14.032 "data_offset": 0, 00:15:14.032 "data_size": 65536 00:15:14.032 }, 00:15:14.032 
{ 00:15:14.032 "name": "BaseBdev3", 00:15:14.032 "uuid": "e8d31599-2076-59e3-af31-87e06240274f", 00:15:14.032 "is_configured": true, 00:15:14.032 "data_offset": 0, 00:15:14.032 "data_size": 65536 00:15:14.032 } 00:15:14.032 ] 00:15:14.032 }' 00:15:14.032 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.032 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.032 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.291 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.291 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:14.291 11:53:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.291 11:53:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.291 [2024-11-27 11:53:40.442126] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:14.291 [2024-11-27 11:53:40.519948] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:14.291 [2024-11-27 11:53:40.520136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.291 [2024-11-27 11:53:40.520165] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:14.291 [2024-11-27 11:53:40.520177] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:14.291 11:53:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.291 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:14.291 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:15:14.291 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.291 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.291 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.291 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:14.291 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.291 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.291 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.291 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.291 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.291 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.291 11:53:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.291 11:53:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.291 11:53:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.291 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.291 "name": "raid_bdev1", 00:15:14.291 "uuid": "42fb6624-b5f9-4263-b032-7e70a0aeea9a", 00:15:14.291 "strip_size_kb": 64, 00:15:14.291 "state": "online", 00:15:14.291 "raid_level": "raid5f", 00:15:14.291 "superblock": false, 00:15:14.291 "num_base_bdevs": 3, 00:15:14.291 "num_base_bdevs_discovered": 2, 00:15:14.291 "num_base_bdevs_operational": 2, 00:15:14.291 "base_bdevs_list": [ 00:15:14.291 { 00:15:14.291 "name": null, 00:15:14.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.291 
"is_configured": false, 00:15:14.291 "data_offset": 0, 00:15:14.291 "data_size": 65536 00:15:14.291 }, 00:15:14.291 { 00:15:14.291 "name": "BaseBdev2", 00:15:14.291 "uuid": "88c7f561-ef8f-5914-9c01-9df15467107e", 00:15:14.291 "is_configured": true, 00:15:14.291 "data_offset": 0, 00:15:14.291 "data_size": 65536 00:15:14.291 }, 00:15:14.291 { 00:15:14.291 "name": "BaseBdev3", 00:15:14.291 "uuid": "e8d31599-2076-59e3-af31-87e06240274f", 00:15:14.291 "is_configured": true, 00:15:14.291 "data_offset": 0, 00:15:14.291 "data_size": 65536 00:15:14.291 } 00:15:14.291 ] 00:15:14.291 }' 00:15:14.291 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.291 11:53:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.870 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:14.871 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.871 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:14.871 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:14.871 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.871 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.871 11:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.871 11:53:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.871 11:53:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.871 11:53:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.871 11:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.871 "name": 
"raid_bdev1", 00:15:14.871 "uuid": "42fb6624-b5f9-4263-b032-7e70a0aeea9a", 00:15:14.871 "strip_size_kb": 64, 00:15:14.871 "state": "online", 00:15:14.871 "raid_level": "raid5f", 00:15:14.871 "superblock": false, 00:15:14.871 "num_base_bdevs": 3, 00:15:14.871 "num_base_bdevs_discovered": 2, 00:15:14.871 "num_base_bdevs_operational": 2, 00:15:14.871 "base_bdevs_list": [ 00:15:14.871 { 00:15:14.871 "name": null, 00:15:14.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.871 "is_configured": false, 00:15:14.871 "data_offset": 0, 00:15:14.871 "data_size": 65536 00:15:14.871 }, 00:15:14.871 { 00:15:14.871 "name": "BaseBdev2", 00:15:14.871 "uuid": "88c7f561-ef8f-5914-9c01-9df15467107e", 00:15:14.871 "is_configured": true, 00:15:14.871 "data_offset": 0, 00:15:14.871 "data_size": 65536 00:15:14.871 }, 00:15:14.872 { 00:15:14.872 "name": "BaseBdev3", 00:15:14.872 "uuid": "e8d31599-2076-59e3-af31-87e06240274f", 00:15:14.872 "is_configured": true, 00:15:14.872 "data_offset": 0, 00:15:14.872 "data_size": 65536 00:15:14.872 } 00:15:14.872 ] 00:15:14.872 }' 00:15:14.872 11:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.872 11:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:14.872 11:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.872 11:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:14.872 11:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:14.872 11:53:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.872 11:53:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.872 [2024-11-27 11:53:41.138303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:14.872 [2024-11-27 
11:53:41.158688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:14.872 11:53:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.872 11:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:14.872 [2024-11-27 11:53:41.168466] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:15.853 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.853 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.853 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.853 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.853 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.853 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.853 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.853 11:53:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.853 11:53:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.853 11:53:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.853 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.853 "name": "raid_bdev1", 00:15:15.853 "uuid": "42fb6624-b5f9-4263-b032-7e70a0aeea9a", 00:15:15.853 "strip_size_kb": 64, 00:15:15.853 "state": "online", 00:15:15.853 "raid_level": "raid5f", 00:15:15.853 "superblock": false, 00:15:15.853 "num_base_bdevs": 3, 00:15:15.853 "num_base_bdevs_discovered": 3, 00:15:15.853 "num_base_bdevs_operational": 3, 
00:15:15.853 "process": { 00:15:15.853 "type": "rebuild", 00:15:15.853 "target": "spare", 00:15:15.853 "progress": { 00:15:15.853 "blocks": 20480, 00:15:15.853 "percent": 15 00:15:15.853 } 00:15:15.853 }, 00:15:15.853 "base_bdevs_list": [ 00:15:15.853 { 00:15:15.853 "name": "spare", 00:15:15.853 "uuid": "4ed6235e-5b57-5c2c-89e5-90c1c2c87ff1", 00:15:15.853 "is_configured": true, 00:15:15.853 "data_offset": 0, 00:15:15.853 "data_size": 65536 00:15:15.853 }, 00:15:15.853 { 00:15:15.853 "name": "BaseBdev2", 00:15:15.853 "uuid": "88c7f561-ef8f-5914-9c01-9df15467107e", 00:15:15.853 "is_configured": true, 00:15:15.853 "data_offset": 0, 00:15:15.853 "data_size": 65536 00:15:15.853 }, 00:15:15.853 { 00:15:15.853 "name": "BaseBdev3", 00:15:15.853 "uuid": "e8d31599-2076-59e3-af31-87e06240274f", 00:15:15.853 "is_configured": true, 00:15:15.853 "data_offset": 0, 00:15:15.853 "data_size": 65536 00:15:15.853 } 00:15:15.853 ] 00:15:15.853 }' 00:15:15.853 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.112 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.112 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.112 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.112 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:16.112 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:16.112 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:16.112 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=557 00:15:16.112 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:16.112 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.112 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.112 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.112 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.112 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.112 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.112 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.112 11:53:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.112 11:53:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.112 11:53:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.112 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.112 "name": "raid_bdev1", 00:15:16.112 "uuid": "42fb6624-b5f9-4263-b032-7e70a0aeea9a", 00:15:16.112 "strip_size_kb": 64, 00:15:16.112 "state": "online", 00:15:16.112 "raid_level": "raid5f", 00:15:16.112 "superblock": false, 00:15:16.112 "num_base_bdevs": 3, 00:15:16.112 "num_base_bdevs_discovered": 3, 00:15:16.112 "num_base_bdevs_operational": 3, 00:15:16.112 "process": { 00:15:16.112 "type": "rebuild", 00:15:16.112 "target": "spare", 00:15:16.112 "progress": { 00:15:16.112 "blocks": 22528, 00:15:16.112 "percent": 17 00:15:16.112 } 00:15:16.112 }, 00:15:16.112 "base_bdevs_list": [ 00:15:16.112 { 00:15:16.112 "name": "spare", 00:15:16.112 "uuid": "4ed6235e-5b57-5c2c-89e5-90c1c2c87ff1", 00:15:16.112 "is_configured": true, 00:15:16.112 "data_offset": 0, 00:15:16.112 "data_size": 65536 00:15:16.112 }, 00:15:16.112 { 00:15:16.112 "name": "BaseBdev2", 
00:15:16.112 "uuid": "88c7f561-ef8f-5914-9c01-9df15467107e", 00:15:16.112 "is_configured": true, 00:15:16.112 "data_offset": 0, 00:15:16.112 "data_size": 65536 00:15:16.112 }, 00:15:16.112 { 00:15:16.112 "name": "BaseBdev3", 00:15:16.112 "uuid": "e8d31599-2076-59e3-af31-87e06240274f", 00:15:16.112 "is_configured": true, 00:15:16.112 "data_offset": 0, 00:15:16.112 "data_size": 65536 00:15:16.112 } 00:15:16.112 ] 00:15:16.112 }' 00:15:16.112 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.112 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.112 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.112 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.112 11:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:17.105 11:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:17.105 11:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.105 11:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.105 11:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.105 11:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.105 11:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.105 11:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.105 11:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.105 11:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.105 11:53:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.105 11:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.364 11:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.364 "name": "raid_bdev1", 00:15:17.364 "uuid": "42fb6624-b5f9-4263-b032-7e70a0aeea9a", 00:15:17.364 "strip_size_kb": 64, 00:15:17.364 "state": "online", 00:15:17.364 "raid_level": "raid5f", 00:15:17.364 "superblock": false, 00:15:17.364 "num_base_bdevs": 3, 00:15:17.364 "num_base_bdevs_discovered": 3, 00:15:17.364 "num_base_bdevs_operational": 3, 00:15:17.364 "process": { 00:15:17.364 "type": "rebuild", 00:15:17.364 "target": "spare", 00:15:17.364 "progress": { 00:15:17.364 "blocks": 45056, 00:15:17.364 "percent": 34 00:15:17.364 } 00:15:17.364 }, 00:15:17.364 "base_bdevs_list": [ 00:15:17.364 { 00:15:17.364 "name": "spare", 00:15:17.364 "uuid": "4ed6235e-5b57-5c2c-89e5-90c1c2c87ff1", 00:15:17.364 "is_configured": true, 00:15:17.364 "data_offset": 0, 00:15:17.364 "data_size": 65536 00:15:17.364 }, 00:15:17.364 { 00:15:17.364 "name": "BaseBdev2", 00:15:17.364 "uuid": "88c7f561-ef8f-5914-9c01-9df15467107e", 00:15:17.364 "is_configured": true, 00:15:17.364 "data_offset": 0, 00:15:17.364 "data_size": 65536 00:15:17.364 }, 00:15:17.364 { 00:15:17.364 "name": "BaseBdev3", 00:15:17.364 "uuid": "e8d31599-2076-59e3-af31-87e06240274f", 00:15:17.364 "is_configured": true, 00:15:17.364 "data_offset": 0, 00:15:17.364 "data_size": 65536 00:15:17.364 } 00:15:17.364 ] 00:15:17.364 }' 00:15:17.364 11:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.364 11:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.364 11:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.364 11:53:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.364 11:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:18.301 11:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:18.301 11:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.301 11:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.301 11:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:18.301 11:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.301 11:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.301 11:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.301 11:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.301 11:53:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.301 11:53:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.301 11:53:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.301 11:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.301 "name": "raid_bdev1", 00:15:18.301 "uuid": "42fb6624-b5f9-4263-b032-7e70a0aeea9a", 00:15:18.301 "strip_size_kb": 64, 00:15:18.301 "state": "online", 00:15:18.301 "raid_level": "raid5f", 00:15:18.301 "superblock": false, 00:15:18.301 "num_base_bdevs": 3, 00:15:18.301 "num_base_bdevs_discovered": 3, 00:15:18.301 "num_base_bdevs_operational": 3, 00:15:18.301 "process": { 00:15:18.301 "type": "rebuild", 00:15:18.301 "target": "spare", 00:15:18.301 "progress": { 00:15:18.301 "blocks": 69632, 
00:15:18.301 "percent": 53 00:15:18.301 } 00:15:18.301 }, 00:15:18.301 "base_bdevs_list": [ 00:15:18.301 { 00:15:18.301 "name": "spare", 00:15:18.301 "uuid": "4ed6235e-5b57-5c2c-89e5-90c1c2c87ff1", 00:15:18.301 "is_configured": true, 00:15:18.301 "data_offset": 0, 00:15:18.301 "data_size": 65536 00:15:18.301 }, 00:15:18.301 { 00:15:18.301 "name": "BaseBdev2", 00:15:18.301 "uuid": "88c7f561-ef8f-5914-9c01-9df15467107e", 00:15:18.301 "is_configured": true, 00:15:18.301 "data_offset": 0, 00:15:18.301 "data_size": 65536 00:15:18.301 }, 00:15:18.301 { 00:15:18.301 "name": "BaseBdev3", 00:15:18.301 "uuid": "e8d31599-2076-59e3-af31-87e06240274f", 00:15:18.301 "is_configured": true, 00:15:18.301 "data_offset": 0, 00:15:18.301 "data_size": 65536 00:15:18.301 } 00:15:18.301 ] 00:15:18.301 }' 00:15:18.301 11:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.561 11:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:18.561 11:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.561 11:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:18.561 11:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:19.497 11:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:19.497 11:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.497 11:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.497 11:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.497 11:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.497 11:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
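The repeated `(( SECONDS < timeout ))` / `verify_raid_bdev_process` / `sleep 1` records traced here form a polling loop that watches `.process.progress.percent` climb (15 → 17 → 34 → 53 → 70 → 89 percent) until the rebuild finishes or the time budget runs out. A self-contained sketch of that loop shape, where `pct` and the `+25` step are fake stand-ins for the real `rpc_cmd bdev_raid_get_bdevs` / `jq` query:

```shell
# Sketch of the bdev_bdev_raid.sh@707-711 polling loop: keep sampling
# rebuild progress while a timeout budget remains, stop once complete.
timeout=$((SECONDS + 10))   # SECONDS is bash's built-in elapsed-seconds counter
pct=0
iterations=0
while (( SECONDS < timeout )); do
  pct=$((pct + 25))         # fake stand-in for querying rebuild percent over RPC
  iterations=$((iterations + 1))
  (( pct >= 100 )) && break
  # the real loop does `sleep 1` here between RPC samples
done
```

The arithmetic `(( SECONDS < timeout ))` guard is why the trace shows `local timeout=557` once and then only the comparison on each pass: the budget is computed a single time and the builtin counter does the rest.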
00:15:19.497 11:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.497 11:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.497 11:53:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.497 11:53:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.497 11:53:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.497 11:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.497 "name": "raid_bdev1", 00:15:19.497 "uuid": "42fb6624-b5f9-4263-b032-7e70a0aeea9a", 00:15:19.497 "strip_size_kb": 64, 00:15:19.497 "state": "online", 00:15:19.497 "raid_level": "raid5f", 00:15:19.497 "superblock": false, 00:15:19.497 "num_base_bdevs": 3, 00:15:19.497 "num_base_bdevs_discovered": 3, 00:15:19.497 "num_base_bdevs_operational": 3, 00:15:19.497 "process": { 00:15:19.497 "type": "rebuild", 00:15:19.497 "target": "spare", 00:15:19.497 "progress": { 00:15:19.497 "blocks": 92160, 00:15:19.497 "percent": 70 00:15:19.497 } 00:15:19.497 }, 00:15:19.497 "base_bdevs_list": [ 00:15:19.497 { 00:15:19.497 "name": "spare", 00:15:19.497 "uuid": "4ed6235e-5b57-5c2c-89e5-90c1c2c87ff1", 00:15:19.497 "is_configured": true, 00:15:19.497 "data_offset": 0, 00:15:19.497 "data_size": 65536 00:15:19.497 }, 00:15:19.497 { 00:15:19.497 "name": "BaseBdev2", 00:15:19.497 "uuid": "88c7f561-ef8f-5914-9c01-9df15467107e", 00:15:19.497 "is_configured": true, 00:15:19.497 "data_offset": 0, 00:15:19.497 "data_size": 65536 00:15:19.497 }, 00:15:19.497 { 00:15:19.497 "name": "BaseBdev3", 00:15:19.497 "uuid": "e8d31599-2076-59e3-af31-87e06240274f", 00:15:19.497 "is_configured": true, 00:15:19.497 "data_offset": 0, 00:15:19.497 "data_size": 65536 00:15:19.497 } 00:15:19.497 ] 00:15:19.497 }' 00:15:19.497 11:53:45 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.497 11:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:19.497 11:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.756 11:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.756 11:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:20.691 11:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:20.691 11:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:20.691 11:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.691 11:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:20.691 11:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:20.691 11:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.691 11:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.691 11:53:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.691 11:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.691 11:53:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.691 11:53:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.691 11:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.691 "name": "raid_bdev1", 00:15:20.691 "uuid": "42fb6624-b5f9-4263-b032-7e70a0aeea9a", 00:15:20.691 "strip_size_kb": 64, 00:15:20.691 "state": "online", 00:15:20.691 "raid_level": "raid5f", 
00:15:20.691 "superblock": false, 00:15:20.691 "num_base_bdevs": 3, 00:15:20.691 "num_base_bdevs_discovered": 3, 00:15:20.691 "num_base_bdevs_operational": 3, 00:15:20.691 "process": { 00:15:20.691 "type": "rebuild", 00:15:20.691 "target": "spare", 00:15:20.691 "progress": { 00:15:20.691 "blocks": 116736, 00:15:20.691 "percent": 89 00:15:20.691 } 00:15:20.691 }, 00:15:20.691 "base_bdevs_list": [ 00:15:20.691 { 00:15:20.691 "name": "spare", 00:15:20.691 "uuid": "4ed6235e-5b57-5c2c-89e5-90c1c2c87ff1", 00:15:20.691 "is_configured": true, 00:15:20.691 "data_offset": 0, 00:15:20.691 "data_size": 65536 00:15:20.691 }, 00:15:20.691 { 00:15:20.691 "name": "BaseBdev2", 00:15:20.691 "uuid": "88c7f561-ef8f-5914-9c01-9df15467107e", 00:15:20.691 "is_configured": true, 00:15:20.691 "data_offset": 0, 00:15:20.691 "data_size": 65536 00:15:20.692 }, 00:15:20.692 { 00:15:20.692 "name": "BaseBdev3", 00:15:20.692 "uuid": "e8d31599-2076-59e3-af31-87e06240274f", 00:15:20.692 "is_configured": true, 00:15:20.692 "data_offset": 0, 00:15:20.692 "data_size": 65536 00:15:20.692 } 00:15:20.692 ] 00:15:20.692 }' 00:15:20.692 11:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.692 11:53:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:20.692 11:53:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.692 11:53:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:20.692 11:53:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:21.257 [2024-11-27 11:53:47.634918] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:21.257 [2024-11-27 11:53:47.635139] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:21.257 [2024-11-27 11:53:47.635227] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:15:21.833 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:21.833 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.833 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.833 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.833 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.833 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.833 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.833 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.833 11:53:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.833 11:53:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.833 11:53:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.833 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.833 "name": "raid_bdev1", 00:15:21.833 "uuid": "42fb6624-b5f9-4263-b032-7e70a0aeea9a", 00:15:21.833 "strip_size_kb": 64, 00:15:21.833 "state": "online", 00:15:21.833 "raid_level": "raid5f", 00:15:21.833 "superblock": false, 00:15:21.833 "num_base_bdevs": 3, 00:15:21.833 "num_base_bdevs_discovered": 3, 00:15:21.833 "num_base_bdevs_operational": 3, 00:15:21.833 "base_bdevs_list": [ 00:15:21.833 { 00:15:21.833 "name": "spare", 00:15:21.833 "uuid": "4ed6235e-5b57-5c2c-89e5-90c1c2c87ff1", 00:15:21.833 "is_configured": true, 00:15:21.833 "data_offset": 0, 00:15:21.833 "data_size": 65536 00:15:21.833 }, 00:15:21.833 { 00:15:21.833 "name": 
"BaseBdev2", 00:15:21.833 "uuid": "88c7f561-ef8f-5914-9c01-9df15467107e", 00:15:21.833 "is_configured": true, 00:15:21.833 "data_offset": 0, 00:15:21.833 "data_size": 65536 00:15:21.833 }, 00:15:21.833 { 00:15:21.833 "name": "BaseBdev3", 00:15:21.833 "uuid": "e8d31599-2076-59e3-af31-87e06240274f", 00:15:21.833 "is_configured": true, 00:15:21.833 "data_offset": 0, 00:15:21.833 "data_size": 65536 00:15:21.833 } 00:15:21.833 ] 00:15:21.833 }' 00:15:21.833 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.833 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:21.833 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.833 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:21.833 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:21.833 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:21.833 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.833 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:21.833 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:21.833 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.833 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.833 11:53:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.833 11:53:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.833 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.833 11:53:48 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.093 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.093 "name": "raid_bdev1", 00:15:22.093 "uuid": "42fb6624-b5f9-4263-b032-7e70a0aeea9a", 00:15:22.093 "strip_size_kb": 64, 00:15:22.093 "state": "online", 00:15:22.093 "raid_level": "raid5f", 00:15:22.093 "superblock": false, 00:15:22.093 "num_base_bdevs": 3, 00:15:22.093 "num_base_bdevs_discovered": 3, 00:15:22.093 "num_base_bdevs_operational": 3, 00:15:22.093 "base_bdevs_list": [ 00:15:22.093 { 00:15:22.093 "name": "spare", 00:15:22.093 "uuid": "4ed6235e-5b57-5c2c-89e5-90c1c2c87ff1", 00:15:22.093 "is_configured": true, 00:15:22.093 "data_offset": 0, 00:15:22.093 "data_size": 65536 00:15:22.093 }, 00:15:22.093 { 00:15:22.093 "name": "BaseBdev2", 00:15:22.093 "uuid": "88c7f561-ef8f-5914-9c01-9df15467107e", 00:15:22.093 "is_configured": true, 00:15:22.093 "data_offset": 0, 00:15:22.093 "data_size": 65536 00:15:22.093 }, 00:15:22.093 { 00:15:22.093 "name": "BaseBdev3", 00:15:22.093 "uuid": "e8d31599-2076-59e3-af31-87e06240274f", 00:15:22.093 "is_configured": true, 00:15:22.093 "data_offset": 0, 00:15:22.093 "data_size": 65536 00:15:22.093 } 00:15:22.093 ] 00:15:22.093 }' 00:15:22.093 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.093 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:22.093 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.093 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:22.093 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:22.093 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.093 11:53:48 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.093 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.093 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.093 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.093 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.093 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.093 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.093 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.093 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.093 11:53:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.093 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.093 11:53:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.093 11:53:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.093 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.093 "name": "raid_bdev1", 00:15:22.093 "uuid": "42fb6624-b5f9-4263-b032-7e70a0aeea9a", 00:15:22.093 "strip_size_kb": 64, 00:15:22.093 "state": "online", 00:15:22.093 "raid_level": "raid5f", 00:15:22.093 "superblock": false, 00:15:22.093 "num_base_bdevs": 3, 00:15:22.093 "num_base_bdevs_discovered": 3, 00:15:22.093 "num_base_bdevs_operational": 3, 00:15:22.093 "base_bdevs_list": [ 00:15:22.093 { 00:15:22.093 "name": "spare", 00:15:22.093 "uuid": "4ed6235e-5b57-5c2c-89e5-90c1c2c87ff1", 00:15:22.093 "is_configured": 
true, 00:15:22.093 "data_offset": 0, 00:15:22.093 "data_size": 65536 00:15:22.093 }, 00:15:22.093 { 00:15:22.093 "name": "BaseBdev2", 00:15:22.093 "uuid": "88c7f561-ef8f-5914-9c01-9df15467107e", 00:15:22.093 "is_configured": true, 00:15:22.093 "data_offset": 0, 00:15:22.093 "data_size": 65536 00:15:22.093 }, 00:15:22.093 { 00:15:22.093 "name": "BaseBdev3", 00:15:22.093 "uuid": "e8d31599-2076-59e3-af31-87e06240274f", 00:15:22.093 "is_configured": true, 00:15:22.093 "data_offset": 0, 00:15:22.093 "data_size": 65536 00:15:22.093 } 00:15:22.093 ] 00:15:22.093 }' 00:15:22.093 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.093 11:53:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.661 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:22.661 11:53:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.661 11:53:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.661 [2024-11-27 11:53:48.797015] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:22.661 [2024-11-27 11:53:48.797130] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:22.661 [2024-11-27 11:53:48.797272] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.661 [2024-11-27 11:53:48.797390] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:22.661 [2024-11-27 11:53:48.797455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:22.661 11:53:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.661 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:22.661 11:53:48 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.661 11:53:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.661 11:53:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.661 11:53:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.661 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:22.661 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:22.661 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:22.661 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:22.662 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:22.662 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:22.662 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:22.662 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:22.662 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:22.662 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:22.662 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:22.662 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:22.662 11:53:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:22.920 /dev/nbd0 00:15:22.920 11:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:22.920 11:53:49 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:22.920 11:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:22.920 11:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:22.920 11:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:22.920 11:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:22.920 11:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:22.920 11:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:22.920 11:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:22.920 11:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:22.920 11:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:22.920 1+0 records in 00:15:22.920 1+0 records out 00:15:22.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402832 s, 10.2 MB/s 00:15:22.921 11:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.921 11:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:22.921 11:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.921 11:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:22.921 11:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:22.921 11:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:22.921 11:53:49 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:22.921 11:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:23.179 /dev/nbd1 00:15:23.179 11:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:23.179 11:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:23.179 11:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:23.179 11:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:23.179 11:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:23.179 11:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:23.179 11:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:23.179 11:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:23.179 11:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:23.179 11:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:23.179 11:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:23.179 1+0 records in 00:15:23.179 1+0 records out 00:15:23.179 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397268 s, 10.3 MB/s 00:15:23.179 11:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.179 11:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:23.179 11:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.179 
11:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:23.179 11:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:23.179 11:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:23.179 11:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:23.179 11:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:23.437 11:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:23.437 11:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:23.437 11:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:23.437 11:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:23.437 11:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:23.437 11:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:23.437 11:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:23.695 11:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:23.695 11:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:23.695 11:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:23.695 11:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:23.695 11:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:23.695 11:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:23.695 11:53:49 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:15:23.695 11:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:23.695 11:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:23.695 11:53:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:23.955 11:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:23.955 11:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:23.955 11:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:23.955 11:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:23.955 11:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:23.955 11:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:23.955 11:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:23.955 11:53:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:23.955 11:53:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:23.955 11:53:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81616 00:15:23.955 11:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81616 ']' 00:15:23.955 11:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81616 00:15:23.955 11:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:23.955 11:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:23.955 11:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81616 00:15:23.955 11:53:50 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:23.955 killing process with pid 81616 00:15:23.955 11:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:23.955 11:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81616' 00:15:23.955 11:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81616 00:15:23.955 Received shutdown signal, test time was about 60.000000 seconds 00:15:23.955 00:15:23.955 Latency(us) 00:15:23.955 [2024-11-27T11:53:50.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.955 [2024-11-27T11:53:50.340Z] =================================================================================================================== 00:15:23.955 [2024-11-27T11:53:50.340Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:23.955 [2024-11-27 11:53:50.213540] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:23.955 11:53:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81616 00:15:24.523 [2024-11-27 11:53:50.703338] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:25.900 00:15:25.900 real 0m16.200s 00:15:25.900 user 0m19.813s 00:15:25.900 sys 0m2.278s 00:15:25.900 ************************************ 00:15:25.900 END TEST raid5f_rebuild_test 00:15:25.900 ************************************ 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.900 11:53:52 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:25.900 11:53:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 
00:15:25.900 11:53:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:25.900 11:53:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:25.900 ************************************ 00:15:25.900 START TEST raid5f_rebuild_test_sb 00:15:25.900 ************************************ 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82063 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82063 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@835 -- # '[' -z 82063 ']' 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:25.900 11:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.900 [2024-11-27 11:53:52.248194] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:15:25.900 [2024-11-27 11:53:52.248417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:25.900 Zero copy mechanism will not be used. 
00:15:25.900 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82063 ] 00:15:26.159 [2024-11-27 11:53:52.427747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.418 [2024-11-27 11:53:52.562735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.677 [2024-11-27 11:53:52.803923] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:26.677 [2024-11-27 11:53:52.804086] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:26.936 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:26.936 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:26.936 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:26.936 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:26.936 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.936 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.936 BaseBdev1_malloc 00:15:26.936 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.936 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:26.936 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.936 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.936 [2024-11-27 11:53:53.204172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:26.936 [2024-11-27 11:53:53.204248] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:15:26.936 [2024-11-27 11:53:53.204274] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:26.936 [2024-11-27 11:53:53.204287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.936 [2024-11-27 11:53:53.206750] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.936 [2024-11-27 11:53:53.206800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:26.936 BaseBdev1 00:15:26.936 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.936 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:26.936 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:26.936 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.936 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.936 BaseBdev2_malloc 00:15:26.936 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.936 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:26.936 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.936 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.936 [2024-11-27 11:53:53.265158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:26.936 [2024-11-27 11:53:53.265301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.936 [2024-11-27 11:53:53.265350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:26.936 
[2024-11-27 11:53:53.265394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.936 [2024-11-27 11:53:53.267899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.936 [2024-11-27 11:53:53.267986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:26.936 BaseBdev2 00:15:26.936 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.936 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:26.936 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:26.936 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.936 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.195 BaseBdev3_malloc 00:15:27.195 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.195 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:27.195 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.195 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.195 [2024-11-27 11:53:53.339938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:27.195 [2024-11-27 11:53:53.340065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.195 [2024-11-27 11:53:53.340122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:27.195 [2024-11-27 11:53:53.340163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.195 [2024-11-27 11:53:53.342619] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.195 [2024-11-27 11:53:53.342707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:27.195 BaseBdev3 00:15:27.195 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.195 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:27.195 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.195 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.195 spare_malloc 00:15:27.195 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.195 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:27.195 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.195 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.195 spare_delay 00:15:27.195 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.195 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:27.195 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.195 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.195 [2024-11-27 11:53:53.414131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:27.195 [2024-11-27 11:53:53.414267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.195 [2024-11-27 11:53:53.414312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:15:27.195 [2024-11-27 11:53:53.414361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.195 [2024-11-27 11:53:53.416897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.195 [2024-11-27 11:53:53.416991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:27.195 spare 00:15:27.195 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.195 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:27.195 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.195 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.195 [2024-11-27 11:53:53.426193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:27.195 [2024-11-27 11:53:53.428355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:27.195 [2024-11-27 11:53:53.428478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:27.195 [2024-11-27 11:53:53.428715] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:27.195 [2024-11-27 11:53:53.428771] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:27.195 [2024-11-27 11:53:53.429130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:27.195 [2024-11-27 11:53:53.436034] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:27.195 [2024-11-27 11:53:53.436105] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:27.195 [2024-11-27 11:53:53.436383] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.195 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.195 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:27.195 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.195 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.195 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.196 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.196 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.196 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.196 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.196 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.196 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.196 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.196 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.196 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.196 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.196 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.196 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.196 "name": "raid_bdev1", 00:15:27.196 
"uuid": "62e6b6f1-1e92-4f9b-8fcd-5981eee55bd7", 00:15:27.196 "strip_size_kb": 64, 00:15:27.196 "state": "online", 00:15:27.196 "raid_level": "raid5f", 00:15:27.196 "superblock": true, 00:15:27.196 "num_base_bdevs": 3, 00:15:27.196 "num_base_bdevs_discovered": 3, 00:15:27.196 "num_base_bdevs_operational": 3, 00:15:27.196 "base_bdevs_list": [ 00:15:27.196 { 00:15:27.196 "name": "BaseBdev1", 00:15:27.196 "uuid": "fa8910b5-83d3-5f92-b84d-3037d84fb776", 00:15:27.196 "is_configured": true, 00:15:27.196 "data_offset": 2048, 00:15:27.196 "data_size": 63488 00:15:27.196 }, 00:15:27.196 { 00:15:27.196 "name": "BaseBdev2", 00:15:27.196 "uuid": "5715dec5-e011-5e06-97d6-97ea37f3effb", 00:15:27.196 "is_configured": true, 00:15:27.196 "data_offset": 2048, 00:15:27.196 "data_size": 63488 00:15:27.196 }, 00:15:27.196 { 00:15:27.196 "name": "BaseBdev3", 00:15:27.196 "uuid": "2d24eabe-1035-59be-8749-e0c3cfdef1f0", 00:15:27.196 "is_configured": true, 00:15:27.196 "data_offset": 2048, 00:15:27.196 "data_size": 63488 00:15:27.196 } 00:15:27.196 ] 00:15:27.196 }' 00:15:27.196 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.196 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.765 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:27.765 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.765 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.765 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:27.765 [2024-11-27 11:53:53.907363] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.765 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.765 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 
-- # raid_bdev_size=126976 00:15:27.765 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.765 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:27.765 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.765 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.765 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.765 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:27.765 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:27.765 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:27.765 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:27.765 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:27.765 11:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:27.765 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:27.765 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:27.765 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:27.765 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:27.765 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:27.765 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:27.765 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:27.765 11:53:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:28.025 [2024-11-27 11:53:54.226911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:28.025 /dev/nbd0 00:15:28.025 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:28.025 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:28.025 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:28.025 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:28.025 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:28.025 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:28.025 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:28.025 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:28.025 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:28.025 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:28.025 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:28.025 1+0 records in 00:15:28.025 1+0 records out 00:15:28.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589771 s, 6.9 MB/s 00:15:28.025 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.025 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:28.025 11:53:54 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.025 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:28.025 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:28.025 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:28.025 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:28.026 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:28.026 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:28.026 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:28.026 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:28.593 496+0 records in 00:15:28.593 496+0 records out 00:15:28.593 65011712 bytes (65 MB, 62 MiB) copied, 0.467088 s, 139 MB/s 00:15:28.593 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:28.593 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:28.593 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:28.593 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:28.593 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:28.593 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:28.593 11:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:28.852 [2024-11-27 
11:53:55.007781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.852 [2024-11-27 11:53:55.044908] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.852 "name": "raid_bdev1", 00:15:28.852 "uuid": "62e6b6f1-1e92-4f9b-8fcd-5981eee55bd7", 00:15:28.852 "strip_size_kb": 64, 00:15:28.852 "state": "online", 00:15:28.852 "raid_level": "raid5f", 00:15:28.852 "superblock": true, 00:15:28.852 "num_base_bdevs": 3, 00:15:28.852 "num_base_bdevs_discovered": 2, 00:15:28.852 "num_base_bdevs_operational": 2, 00:15:28.852 "base_bdevs_list": [ 00:15:28.852 { 00:15:28.852 "name": null, 00:15:28.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.852 "is_configured": false, 00:15:28.852 "data_offset": 0, 00:15:28.852 "data_size": 63488 00:15:28.852 }, 00:15:28.852 { 00:15:28.852 "name": "BaseBdev2", 00:15:28.852 "uuid": "5715dec5-e011-5e06-97d6-97ea37f3effb", 00:15:28.852 
"is_configured": true, 00:15:28.852 "data_offset": 2048, 00:15:28.852 "data_size": 63488 00:15:28.852 }, 00:15:28.852 { 00:15:28.852 "name": "BaseBdev3", 00:15:28.852 "uuid": "2d24eabe-1035-59be-8749-e0c3cfdef1f0", 00:15:28.852 "is_configured": true, 00:15:28.852 "data_offset": 2048, 00:15:28.852 "data_size": 63488 00:15:28.852 } 00:15:28.852 ] 00:15:28.852 }' 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.852 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.419 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:29.419 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.419 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.419 [2024-11-27 11:53:55.524114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:29.419 [2024-11-27 11:53:55.545280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:29.419 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.419 11:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:29.419 [2024-11-27 11:53:55.555061] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:30.354 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.354 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.354 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.354 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.354 11:53:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.354 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.354 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.354 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.354 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.354 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.354 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.354 "name": "raid_bdev1", 00:15:30.354 "uuid": "62e6b6f1-1e92-4f9b-8fcd-5981eee55bd7", 00:15:30.354 "strip_size_kb": 64, 00:15:30.354 "state": "online", 00:15:30.354 "raid_level": "raid5f", 00:15:30.354 "superblock": true, 00:15:30.354 "num_base_bdevs": 3, 00:15:30.354 "num_base_bdevs_discovered": 3, 00:15:30.354 "num_base_bdevs_operational": 3, 00:15:30.354 "process": { 00:15:30.354 "type": "rebuild", 00:15:30.354 "target": "spare", 00:15:30.354 "progress": { 00:15:30.354 "blocks": 20480, 00:15:30.354 "percent": 16 00:15:30.354 } 00:15:30.354 }, 00:15:30.354 "base_bdevs_list": [ 00:15:30.354 { 00:15:30.354 "name": "spare", 00:15:30.354 "uuid": "9424bf04-a069-5946-a191-01f26f059d5b", 00:15:30.354 "is_configured": true, 00:15:30.354 "data_offset": 2048, 00:15:30.354 "data_size": 63488 00:15:30.354 }, 00:15:30.354 { 00:15:30.354 "name": "BaseBdev2", 00:15:30.354 "uuid": "5715dec5-e011-5e06-97d6-97ea37f3effb", 00:15:30.354 "is_configured": true, 00:15:30.354 "data_offset": 2048, 00:15:30.354 "data_size": 63488 00:15:30.354 }, 00:15:30.354 { 00:15:30.354 "name": "BaseBdev3", 00:15:30.354 "uuid": "2d24eabe-1035-59be-8749-e0c3cfdef1f0", 00:15:30.354 "is_configured": true, 00:15:30.354 "data_offset": 2048, 00:15:30.354 "data_size": 
63488 00:15:30.354 } 00:15:30.354 ] 00:15:30.354 }' 00:15:30.354 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.354 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.354 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.354 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.354 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:30.354 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.354 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.354 [2024-11-27 11:53:56.714940] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:30.614 [2024-11-27 11:53:56.767116] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:30.614 [2024-11-27 11:53:56.767195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.614 [2024-11-27 11:53:56.767220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:30.614 [2024-11-27 11:53:56.767231] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:30.614 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.614 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:30.614 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.614 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.614 11:53:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.614 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.614 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:30.614 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.614 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.614 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.614 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.614 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.614 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.614 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.614 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.614 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.614 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.614 "name": "raid_bdev1", 00:15:30.614 "uuid": "62e6b6f1-1e92-4f9b-8fcd-5981eee55bd7", 00:15:30.614 "strip_size_kb": 64, 00:15:30.614 "state": "online", 00:15:30.614 "raid_level": "raid5f", 00:15:30.614 "superblock": true, 00:15:30.614 "num_base_bdevs": 3, 00:15:30.614 "num_base_bdevs_discovered": 2, 00:15:30.614 "num_base_bdevs_operational": 2, 00:15:30.614 "base_bdevs_list": [ 00:15:30.614 { 00:15:30.614 "name": null, 00:15:30.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.614 "is_configured": false, 00:15:30.614 "data_offset": 0, 00:15:30.614 "data_size": 63488 
00:15:30.614 }, 00:15:30.614 { 00:15:30.614 "name": "BaseBdev2", 00:15:30.614 "uuid": "5715dec5-e011-5e06-97d6-97ea37f3effb", 00:15:30.614 "is_configured": true, 00:15:30.614 "data_offset": 2048, 00:15:30.614 "data_size": 63488 00:15:30.614 }, 00:15:30.614 { 00:15:30.614 "name": "BaseBdev3", 00:15:30.614 "uuid": "2d24eabe-1035-59be-8749-e0c3cfdef1f0", 00:15:30.614 "is_configured": true, 00:15:30.614 "data_offset": 2048, 00:15:30.614 "data_size": 63488 00:15:30.614 } 00:15:30.614 ] 00:15:30.614 }' 00:15:30.614 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.614 11:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.182 11:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:31.182 11:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.182 11:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:31.182 11:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:31.182 11:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.182 11:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.182 11:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.182 11:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.182 11:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.182 11:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.182 11:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.182 "name": "raid_bdev1", 00:15:31.182 "uuid": 
"62e6b6f1-1e92-4f9b-8fcd-5981eee55bd7", 00:15:31.182 "strip_size_kb": 64, 00:15:31.182 "state": "online", 00:15:31.182 "raid_level": "raid5f", 00:15:31.183 "superblock": true, 00:15:31.183 "num_base_bdevs": 3, 00:15:31.183 "num_base_bdevs_discovered": 2, 00:15:31.183 "num_base_bdevs_operational": 2, 00:15:31.183 "base_bdevs_list": [ 00:15:31.183 { 00:15:31.183 "name": null, 00:15:31.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.183 "is_configured": false, 00:15:31.183 "data_offset": 0, 00:15:31.183 "data_size": 63488 00:15:31.183 }, 00:15:31.183 { 00:15:31.183 "name": "BaseBdev2", 00:15:31.183 "uuid": "5715dec5-e011-5e06-97d6-97ea37f3effb", 00:15:31.183 "is_configured": true, 00:15:31.183 "data_offset": 2048, 00:15:31.183 "data_size": 63488 00:15:31.183 }, 00:15:31.183 { 00:15:31.183 "name": "BaseBdev3", 00:15:31.183 "uuid": "2d24eabe-1035-59be-8749-e0c3cfdef1f0", 00:15:31.183 "is_configured": true, 00:15:31.183 "data_offset": 2048, 00:15:31.183 "data_size": 63488 00:15:31.183 } 00:15:31.183 ] 00:15:31.183 }' 00:15:31.183 11:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.183 11:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:31.183 11:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.183 11:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:31.183 11:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:31.183 11:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.183 11:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.183 [2024-11-27 11:53:57.429487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:31.183 [2024-11-27 11:53:57.449825] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:31.183 11:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.183 11:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:31.183 [2024-11-27 11:53:57.460006] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:32.120 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.120 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.120 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.120 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.120 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.120 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.120 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.120 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.120 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.120 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.380 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.380 "name": "raid_bdev1", 00:15:32.380 "uuid": "62e6b6f1-1e92-4f9b-8fcd-5981eee55bd7", 00:15:32.380 "strip_size_kb": 64, 00:15:32.380 "state": "online", 00:15:32.380 "raid_level": "raid5f", 00:15:32.380 "superblock": true, 00:15:32.380 "num_base_bdevs": 3, 00:15:32.380 "num_base_bdevs_discovered": 3, 00:15:32.380 
"num_base_bdevs_operational": 3, 00:15:32.380 "process": { 00:15:32.380 "type": "rebuild", 00:15:32.380 "target": "spare", 00:15:32.380 "progress": { 00:15:32.380 "blocks": 20480, 00:15:32.380 "percent": 16 00:15:32.380 } 00:15:32.380 }, 00:15:32.380 "base_bdevs_list": [ 00:15:32.380 { 00:15:32.380 "name": "spare", 00:15:32.380 "uuid": "9424bf04-a069-5946-a191-01f26f059d5b", 00:15:32.380 "is_configured": true, 00:15:32.380 "data_offset": 2048, 00:15:32.380 "data_size": 63488 00:15:32.380 }, 00:15:32.380 { 00:15:32.380 "name": "BaseBdev2", 00:15:32.380 "uuid": "5715dec5-e011-5e06-97d6-97ea37f3effb", 00:15:32.380 "is_configured": true, 00:15:32.380 "data_offset": 2048, 00:15:32.380 "data_size": 63488 00:15:32.380 }, 00:15:32.380 { 00:15:32.380 "name": "BaseBdev3", 00:15:32.380 "uuid": "2d24eabe-1035-59be-8749-e0c3cfdef1f0", 00:15:32.380 "is_configured": true, 00:15:32.380 "data_offset": 2048, 00:15:32.380 "data_size": 63488 00:15:32.380 } 00:15:32.380 ] 00:15:32.380 }' 00:15:32.380 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.380 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.380 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.380 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.380 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:32.380 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:32.380 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:32.380 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:32.380 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:32.380 
11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=573 00:15:32.380 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:32.380 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.380 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.380 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.380 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.380 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.380 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.380 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.380 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.380 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.380 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.380 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.380 "name": "raid_bdev1", 00:15:32.380 "uuid": "62e6b6f1-1e92-4f9b-8fcd-5981eee55bd7", 00:15:32.380 "strip_size_kb": 64, 00:15:32.380 "state": "online", 00:15:32.381 "raid_level": "raid5f", 00:15:32.381 "superblock": true, 00:15:32.381 "num_base_bdevs": 3, 00:15:32.381 "num_base_bdevs_discovered": 3, 00:15:32.381 "num_base_bdevs_operational": 3, 00:15:32.381 "process": { 00:15:32.381 "type": "rebuild", 00:15:32.381 "target": "spare", 00:15:32.381 "progress": { 00:15:32.381 "blocks": 22528, 00:15:32.381 "percent": 17 00:15:32.381 } 00:15:32.381 }, 
00:15:32.381 "base_bdevs_list": [ 00:15:32.381 { 00:15:32.381 "name": "spare", 00:15:32.381 "uuid": "9424bf04-a069-5946-a191-01f26f059d5b", 00:15:32.381 "is_configured": true, 00:15:32.381 "data_offset": 2048, 00:15:32.381 "data_size": 63488 00:15:32.381 }, 00:15:32.381 { 00:15:32.381 "name": "BaseBdev2", 00:15:32.381 "uuid": "5715dec5-e011-5e06-97d6-97ea37f3effb", 00:15:32.381 "is_configured": true, 00:15:32.381 "data_offset": 2048, 00:15:32.381 "data_size": 63488 00:15:32.381 }, 00:15:32.381 { 00:15:32.381 "name": "BaseBdev3", 00:15:32.381 "uuid": "2d24eabe-1035-59be-8749-e0c3cfdef1f0", 00:15:32.381 "is_configured": true, 00:15:32.381 "data_offset": 2048, 00:15:32.381 "data_size": 63488 00:15:32.381 } 00:15:32.381 ] 00:15:32.381 }' 00:15:32.381 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.381 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.381 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.381 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.381 11:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:33.758 11:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:33.758 11:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:33.758 11:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.758 11:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:33.758 11:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:33.758 11:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.758 
11:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.758 11:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.758 11:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.758 11:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.758 11:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.758 11:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.758 "name": "raid_bdev1", 00:15:33.758 "uuid": "62e6b6f1-1e92-4f9b-8fcd-5981eee55bd7", 00:15:33.758 "strip_size_kb": 64, 00:15:33.758 "state": "online", 00:15:33.758 "raid_level": "raid5f", 00:15:33.758 "superblock": true, 00:15:33.758 "num_base_bdevs": 3, 00:15:33.758 "num_base_bdevs_discovered": 3, 00:15:33.758 "num_base_bdevs_operational": 3, 00:15:33.758 "process": { 00:15:33.758 "type": "rebuild", 00:15:33.758 "target": "spare", 00:15:33.758 "progress": { 00:15:33.758 "blocks": 45056, 00:15:33.758 "percent": 35 00:15:33.758 } 00:15:33.758 }, 00:15:33.758 "base_bdevs_list": [ 00:15:33.758 { 00:15:33.758 "name": "spare", 00:15:33.758 "uuid": "9424bf04-a069-5946-a191-01f26f059d5b", 00:15:33.758 "is_configured": true, 00:15:33.758 "data_offset": 2048, 00:15:33.758 "data_size": 63488 00:15:33.758 }, 00:15:33.758 { 00:15:33.758 "name": "BaseBdev2", 00:15:33.758 "uuid": "5715dec5-e011-5e06-97d6-97ea37f3effb", 00:15:33.758 "is_configured": true, 00:15:33.758 "data_offset": 2048, 00:15:33.758 "data_size": 63488 00:15:33.758 }, 00:15:33.758 { 00:15:33.758 "name": "BaseBdev3", 00:15:33.758 "uuid": "2d24eabe-1035-59be-8749-e0c3cfdef1f0", 00:15:33.758 "is_configured": true, 00:15:33.758 "data_offset": 2048, 00:15:33.758 "data_size": 63488 00:15:33.758 } 00:15:33.758 ] 00:15:33.758 }' 00:15:33.758 11:53:59 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.758 11:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:33.758 11:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.758 11:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:33.758 11:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:34.696 11:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:34.696 11:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:34.696 11:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.696 11:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:34.696 11:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:34.696 11:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.696 11:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.696 11:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.696 11:54:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.696 11:54:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.696 11:54:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.696 11:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.696 "name": "raid_bdev1", 00:15:34.696 "uuid": "62e6b6f1-1e92-4f9b-8fcd-5981eee55bd7", 00:15:34.696 
"strip_size_kb": 64, 00:15:34.696 "state": "online", 00:15:34.696 "raid_level": "raid5f", 00:15:34.696 "superblock": true, 00:15:34.696 "num_base_bdevs": 3, 00:15:34.696 "num_base_bdevs_discovered": 3, 00:15:34.696 "num_base_bdevs_operational": 3, 00:15:34.696 "process": { 00:15:34.696 "type": "rebuild", 00:15:34.696 "target": "spare", 00:15:34.696 "progress": { 00:15:34.696 "blocks": 69632, 00:15:34.696 "percent": 54 00:15:34.696 } 00:15:34.696 }, 00:15:34.696 "base_bdevs_list": [ 00:15:34.696 { 00:15:34.697 "name": "spare", 00:15:34.697 "uuid": "9424bf04-a069-5946-a191-01f26f059d5b", 00:15:34.697 "is_configured": true, 00:15:34.697 "data_offset": 2048, 00:15:34.697 "data_size": 63488 00:15:34.697 }, 00:15:34.697 { 00:15:34.697 "name": "BaseBdev2", 00:15:34.697 "uuid": "5715dec5-e011-5e06-97d6-97ea37f3effb", 00:15:34.697 "is_configured": true, 00:15:34.697 "data_offset": 2048, 00:15:34.697 "data_size": 63488 00:15:34.697 }, 00:15:34.697 { 00:15:34.697 "name": "BaseBdev3", 00:15:34.697 "uuid": "2d24eabe-1035-59be-8749-e0c3cfdef1f0", 00:15:34.697 "is_configured": true, 00:15:34.697 "data_offset": 2048, 00:15:34.697 "data_size": 63488 00:15:34.697 } 00:15:34.697 ] 00:15:34.697 }' 00:15:34.697 11:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.697 11:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:34.697 11:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.697 11:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.697 11:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:36.076 11:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:36.076 11:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:15:36.076 11:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.076 11:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.076 11:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.076 11:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.076 11:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.076 11:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.076 11:54:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.076 11:54:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.076 11:54:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.076 11:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.076 "name": "raid_bdev1", 00:15:36.076 "uuid": "62e6b6f1-1e92-4f9b-8fcd-5981eee55bd7", 00:15:36.076 "strip_size_kb": 64, 00:15:36.076 "state": "online", 00:15:36.076 "raid_level": "raid5f", 00:15:36.076 "superblock": true, 00:15:36.076 "num_base_bdevs": 3, 00:15:36.076 "num_base_bdevs_discovered": 3, 00:15:36.076 "num_base_bdevs_operational": 3, 00:15:36.076 "process": { 00:15:36.076 "type": "rebuild", 00:15:36.076 "target": "spare", 00:15:36.076 "progress": { 00:15:36.076 "blocks": 92160, 00:15:36.076 "percent": 72 00:15:36.076 } 00:15:36.076 }, 00:15:36.076 "base_bdevs_list": [ 00:15:36.076 { 00:15:36.076 "name": "spare", 00:15:36.076 "uuid": "9424bf04-a069-5946-a191-01f26f059d5b", 00:15:36.076 "is_configured": true, 00:15:36.076 "data_offset": 2048, 00:15:36.076 "data_size": 63488 00:15:36.076 }, 00:15:36.076 { 00:15:36.076 "name": "BaseBdev2", 00:15:36.076 "uuid": 
"5715dec5-e011-5e06-97d6-97ea37f3effb", 00:15:36.076 "is_configured": true, 00:15:36.076 "data_offset": 2048, 00:15:36.076 "data_size": 63488 00:15:36.076 }, 00:15:36.076 { 00:15:36.076 "name": "BaseBdev3", 00:15:36.076 "uuid": "2d24eabe-1035-59be-8749-e0c3cfdef1f0", 00:15:36.076 "is_configured": true, 00:15:36.076 "data_offset": 2048, 00:15:36.076 "data_size": 63488 00:15:36.076 } 00:15:36.076 ] 00:15:36.076 }' 00:15:36.076 11:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.076 11:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.076 11:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.076 11:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.076 11:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:37.013 11:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:37.013 11:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.013 11:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.013 11:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.013 11:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.013 11:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.013 11:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.013 11:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.013 11:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:37.013 11:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.013 11:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.013 11:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.013 "name": "raid_bdev1", 00:15:37.013 "uuid": "62e6b6f1-1e92-4f9b-8fcd-5981eee55bd7", 00:15:37.013 "strip_size_kb": 64, 00:15:37.013 "state": "online", 00:15:37.013 "raid_level": "raid5f", 00:15:37.013 "superblock": true, 00:15:37.013 "num_base_bdevs": 3, 00:15:37.013 "num_base_bdevs_discovered": 3, 00:15:37.013 "num_base_bdevs_operational": 3, 00:15:37.013 "process": { 00:15:37.013 "type": "rebuild", 00:15:37.013 "target": "spare", 00:15:37.013 "progress": { 00:15:37.013 "blocks": 114688, 00:15:37.013 "percent": 90 00:15:37.013 } 00:15:37.013 }, 00:15:37.013 "base_bdevs_list": [ 00:15:37.013 { 00:15:37.013 "name": "spare", 00:15:37.013 "uuid": "9424bf04-a069-5946-a191-01f26f059d5b", 00:15:37.013 "is_configured": true, 00:15:37.013 "data_offset": 2048, 00:15:37.013 "data_size": 63488 00:15:37.013 }, 00:15:37.013 { 00:15:37.013 "name": "BaseBdev2", 00:15:37.013 "uuid": "5715dec5-e011-5e06-97d6-97ea37f3effb", 00:15:37.013 "is_configured": true, 00:15:37.013 "data_offset": 2048, 00:15:37.013 "data_size": 63488 00:15:37.013 }, 00:15:37.013 { 00:15:37.013 "name": "BaseBdev3", 00:15:37.013 "uuid": "2d24eabe-1035-59be-8749-e0c3cfdef1f0", 00:15:37.013 "is_configured": true, 00:15:37.013 "data_offset": 2048, 00:15:37.013 "data_size": 63488 00:15:37.013 } 00:15:37.013 ] 00:15:37.013 }' 00:15:37.013 11:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.013 11:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.013 11:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.013 
11:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.013 11:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:37.581 [2024-11-27 11:54:03.721912] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:37.581 [2024-11-27 11:54:03.722118] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:37.581 [2024-11-27 11:54:03.722302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.149 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:38.150 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:38.150 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.150 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:38.150 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:38.150 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.150 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.150 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.150 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.150 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.150 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.150 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.150 "name": "raid_bdev1", 00:15:38.150 "uuid": 
"62e6b6f1-1e92-4f9b-8fcd-5981eee55bd7", 00:15:38.150 "strip_size_kb": 64, 00:15:38.150 "state": "online", 00:15:38.150 "raid_level": "raid5f", 00:15:38.150 "superblock": true, 00:15:38.150 "num_base_bdevs": 3, 00:15:38.150 "num_base_bdevs_discovered": 3, 00:15:38.150 "num_base_bdevs_operational": 3, 00:15:38.150 "base_bdevs_list": [ 00:15:38.150 { 00:15:38.150 "name": "spare", 00:15:38.150 "uuid": "9424bf04-a069-5946-a191-01f26f059d5b", 00:15:38.150 "is_configured": true, 00:15:38.150 "data_offset": 2048, 00:15:38.150 "data_size": 63488 00:15:38.150 }, 00:15:38.150 { 00:15:38.150 "name": "BaseBdev2", 00:15:38.150 "uuid": "5715dec5-e011-5e06-97d6-97ea37f3effb", 00:15:38.150 "is_configured": true, 00:15:38.150 "data_offset": 2048, 00:15:38.150 "data_size": 63488 00:15:38.150 }, 00:15:38.150 { 00:15:38.150 "name": "BaseBdev3", 00:15:38.150 "uuid": "2d24eabe-1035-59be-8749-e0c3cfdef1f0", 00:15:38.150 "is_configured": true, 00:15:38.150 "data_offset": 2048, 00:15:38.150 "data_size": 63488 00:15:38.150 } 00:15:38.150 ] 00:15:38.150 }' 00:15:38.150 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.150 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:38.150 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.150 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:38.150 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:38.150 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:38.150 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.150 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:38.150 11:54:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:38.150 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.150 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.150 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.150 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.150 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.150 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.410 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.410 "name": "raid_bdev1", 00:15:38.410 "uuid": "62e6b6f1-1e92-4f9b-8fcd-5981eee55bd7", 00:15:38.410 "strip_size_kb": 64, 00:15:38.410 "state": "online", 00:15:38.410 "raid_level": "raid5f", 00:15:38.410 "superblock": true, 00:15:38.410 "num_base_bdevs": 3, 00:15:38.410 "num_base_bdevs_discovered": 3, 00:15:38.410 "num_base_bdevs_operational": 3, 00:15:38.410 "base_bdevs_list": [ 00:15:38.410 { 00:15:38.410 "name": "spare", 00:15:38.410 "uuid": "9424bf04-a069-5946-a191-01f26f059d5b", 00:15:38.410 "is_configured": true, 00:15:38.410 "data_offset": 2048, 00:15:38.410 "data_size": 63488 00:15:38.410 }, 00:15:38.410 { 00:15:38.410 "name": "BaseBdev2", 00:15:38.410 "uuid": "5715dec5-e011-5e06-97d6-97ea37f3effb", 00:15:38.410 "is_configured": true, 00:15:38.410 "data_offset": 2048, 00:15:38.410 "data_size": 63488 00:15:38.410 }, 00:15:38.410 { 00:15:38.410 "name": "BaseBdev3", 00:15:38.410 "uuid": "2d24eabe-1035-59be-8749-e0c3cfdef1f0", 00:15:38.410 "is_configured": true, 00:15:38.410 "data_offset": 2048, 00:15:38.410 "data_size": 63488 00:15:38.410 } 00:15:38.410 ] 00:15:38.410 }' 00:15:38.410 11:54:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.410 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:38.410 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.410 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:38.410 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:38.410 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.410 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.410 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.410 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.410 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.410 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.410 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.410 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.410 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.410 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.410 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.410 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.410 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:15:38.410 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.410 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.410 "name": "raid_bdev1", 00:15:38.410 "uuid": "62e6b6f1-1e92-4f9b-8fcd-5981eee55bd7", 00:15:38.410 "strip_size_kb": 64, 00:15:38.410 "state": "online", 00:15:38.410 "raid_level": "raid5f", 00:15:38.410 "superblock": true, 00:15:38.410 "num_base_bdevs": 3, 00:15:38.410 "num_base_bdevs_discovered": 3, 00:15:38.410 "num_base_bdevs_operational": 3, 00:15:38.410 "base_bdevs_list": [ 00:15:38.410 { 00:15:38.410 "name": "spare", 00:15:38.410 "uuid": "9424bf04-a069-5946-a191-01f26f059d5b", 00:15:38.410 "is_configured": true, 00:15:38.410 "data_offset": 2048, 00:15:38.410 "data_size": 63488 00:15:38.410 }, 00:15:38.410 { 00:15:38.410 "name": "BaseBdev2", 00:15:38.410 "uuid": "5715dec5-e011-5e06-97d6-97ea37f3effb", 00:15:38.410 "is_configured": true, 00:15:38.410 "data_offset": 2048, 00:15:38.410 "data_size": 63488 00:15:38.410 }, 00:15:38.410 { 00:15:38.410 "name": "BaseBdev3", 00:15:38.410 "uuid": "2d24eabe-1035-59be-8749-e0c3cfdef1f0", 00:15:38.410 "is_configured": true, 00:15:38.410 "data_offset": 2048, 00:15:38.410 "data_size": 63488 00:15:38.410 } 00:15:38.410 ] 00:15:38.410 }' 00:15:38.410 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.410 11:54:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.978 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:38.978 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.978 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.978 [2024-11-27 11:54:05.091262] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:38.979 [2024-11-27 
11:54:05.091363] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:38.979 [2024-11-27 11:54:05.091516] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:38.979 [2024-11-27 11:54:05.091641] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:38.979 [2024-11-27 11:54:05.091712] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:38.979 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.979 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:38.979 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.979 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.979 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.979 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.979 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:38.979 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:38.979 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:38.979 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:38.979 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:38.979 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:38.979 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:38.979 11:54:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:38.979 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:38.979 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:38.979 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:38.979 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:38.979 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:39.239 /dev/nbd0 00:15:39.239 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:39.239 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:39.239 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:39.239 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:39.239 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:39.239 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:39.239 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:39.239 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:39.239 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:39.239 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:39.239 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:39.239 1+0 records in 00:15:39.239 1+0 
records out 00:15:39.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527099 s, 7.8 MB/s 00:15:39.239 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.239 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:39.239 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.239 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:39.239 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:39.239 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:39.239 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:39.239 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:39.499 /dev/nbd1 00:15:39.499 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:39.499 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:39.499 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:39.499 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:39.499 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:39.499 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:39.499 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:39.499 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:39.499 11:54:05 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:39.499 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:39.499 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:39.499 1+0 records in 00:15:39.499 1+0 records out 00:15:39.499 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390469 s, 10.5 MB/s 00:15:39.499 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.499 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:39.499 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.499 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:39.499 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:39.499 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:39.499 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:39.499 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:39.758 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:39.758 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:39.758 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:39.758 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:39.758 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 
-- # local i 00:15:39.758 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.758 11:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:40.017 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:40.017 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:40.017 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:40.017 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.017 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.017 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:40.017 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:40.017 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.017 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.017 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd1 /proc/partitions 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.277 [2024-11-27 11:54:06.459165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:40.277 [2024-11-27 11:54:06.459248] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.277 [2024-11-27 11:54:06.459278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:40.277 [2024-11-27 11:54:06.459293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.277 [2024-11-27 11:54:06.462154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.277 [2024-11-27 11:54:06.462203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:40.277 [2024-11-27 11:54:06.462313] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:40.277 [2024-11-27 11:54:06.462378] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:40.277 [2024-11-27 11:54:06.462574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:40.277 [2024-11-27 11:54:06.462698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:40.277 spare 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.277 [2024-11-27 11:54:06.562643] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:40.277 [2024-11-27 11:54:06.562696] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:40.277 [2024-11-27 11:54:06.563117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:15:40.277 [2024-11-27 11:54:06.570538] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:40.277 [2024-11-27 11:54:06.570566] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:40.277 [2024-11-27 11:54:06.570855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.277 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.277 "name": "raid_bdev1", 00:15:40.277 "uuid": "62e6b6f1-1e92-4f9b-8fcd-5981eee55bd7", 00:15:40.277 "strip_size_kb": 64, 00:15:40.277 "state": "online", 00:15:40.277 "raid_level": "raid5f", 00:15:40.277 "superblock": true, 00:15:40.277 "num_base_bdevs": 3, 00:15:40.277 "num_base_bdevs_discovered": 3, 00:15:40.278 "num_base_bdevs_operational": 3, 00:15:40.278 "base_bdevs_list": [ 00:15:40.278 { 00:15:40.278 "name": "spare", 00:15:40.278 "uuid": "9424bf04-a069-5946-a191-01f26f059d5b", 00:15:40.278 "is_configured": true, 00:15:40.278 
"data_offset": 2048, 00:15:40.278 "data_size": 63488 00:15:40.278 }, 00:15:40.278 { 00:15:40.278 "name": "BaseBdev2", 00:15:40.278 "uuid": "5715dec5-e011-5e06-97d6-97ea37f3effb", 00:15:40.278 "is_configured": true, 00:15:40.278 "data_offset": 2048, 00:15:40.278 "data_size": 63488 00:15:40.278 }, 00:15:40.278 { 00:15:40.278 "name": "BaseBdev3", 00:15:40.278 "uuid": "2d24eabe-1035-59be-8749-e0c3cfdef1f0", 00:15:40.278 "is_configured": true, 00:15:40.278 "data_offset": 2048, 00:15:40.278 "data_size": 63488 00:15:40.278 } 00:15:40.278 ] 00:15:40.278 }' 00:15:40.278 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.278 11:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.846 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:40.846 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.846 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:40.846 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:40.846 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.846 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.846 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.846 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.846 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.846 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.846 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.846 
"name": "raid_bdev1", 00:15:40.846 "uuid": "62e6b6f1-1e92-4f9b-8fcd-5981eee55bd7", 00:15:40.846 "strip_size_kb": 64, 00:15:40.846 "state": "online", 00:15:40.846 "raid_level": "raid5f", 00:15:40.846 "superblock": true, 00:15:40.846 "num_base_bdevs": 3, 00:15:40.846 "num_base_bdevs_discovered": 3, 00:15:40.846 "num_base_bdevs_operational": 3, 00:15:40.846 "base_bdevs_list": [ 00:15:40.846 { 00:15:40.846 "name": "spare", 00:15:40.846 "uuid": "9424bf04-a069-5946-a191-01f26f059d5b", 00:15:40.846 "is_configured": true, 00:15:40.846 "data_offset": 2048, 00:15:40.846 "data_size": 63488 00:15:40.846 }, 00:15:40.846 { 00:15:40.846 "name": "BaseBdev2", 00:15:40.846 "uuid": "5715dec5-e011-5e06-97d6-97ea37f3effb", 00:15:40.846 "is_configured": true, 00:15:40.846 "data_offset": 2048, 00:15:40.846 "data_size": 63488 00:15:40.846 }, 00:15:40.846 { 00:15:40.846 "name": "BaseBdev3", 00:15:40.846 "uuid": "2d24eabe-1035-59be-8749-e0c3cfdef1f0", 00:15:40.846 "is_configured": true, 00:15:40.846 "data_offset": 2048, 00:15:40.846 "data_size": 63488 00:15:40.846 } 00:15:40.846 ] 00:15:40.846 }' 00:15:40.846 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.846 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:40.846 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.846 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:40.846 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:40.846 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.846 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.846 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.846 
11:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.105 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:41.105 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:41.105 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.105 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.105 [2024-11-27 11:54:07.238346] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:41.106 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.106 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:41.106 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.106 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.106 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.106 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.106 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:41.106 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.106 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.106 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.106 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.106 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:41.106 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.106 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.106 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.106 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.106 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.106 "name": "raid_bdev1", 00:15:41.106 "uuid": "62e6b6f1-1e92-4f9b-8fcd-5981eee55bd7", 00:15:41.106 "strip_size_kb": 64, 00:15:41.106 "state": "online", 00:15:41.106 "raid_level": "raid5f", 00:15:41.106 "superblock": true, 00:15:41.106 "num_base_bdevs": 3, 00:15:41.106 "num_base_bdevs_discovered": 2, 00:15:41.106 "num_base_bdevs_operational": 2, 00:15:41.106 "base_bdevs_list": [ 00:15:41.106 { 00:15:41.106 "name": null, 00:15:41.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.106 "is_configured": false, 00:15:41.106 "data_offset": 0, 00:15:41.106 "data_size": 63488 00:15:41.106 }, 00:15:41.106 { 00:15:41.106 "name": "BaseBdev2", 00:15:41.106 "uuid": "5715dec5-e011-5e06-97d6-97ea37f3effb", 00:15:41.106 "is_configured": true, 00:15:41.106 "data_offset": 2048, 00:15:41.106 "data_size": 63488 00:15:41.106 }, 00:15:41.106 { 00:15:41.106 "name": "BaseBdev3", 00:15:41.106 "uuid": "2d24eabe-1035-59be-8749-e0c3cfdef1f0", 00:15:41.106 "is_configured": true, 00:15:41.106 "data_offset": 2048, 00:15:41.106 "data_size": 63488 00:15:41.106 } 00:15:41.106 ] 00:15:41.106 }' 00:15:41.106 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.106 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.364 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:41.364 11:54:07 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.365 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.365 [2024-11-27 11:54:07.673696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:41.365 [2024-11-27 11:54:07.673996] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:41.365 [2024-11-27 11:54:07.674081] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:41.365 [2024-11-27 11:54:07.674154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:41.365 [2024-11-27 11:54:07.694358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:15:41.365 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.365 11:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:41.365 [2024-11-27 11:54:07.704354] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.740 "name": "raid_bdev1", 00:15:42.740 "uuid": "62e6b6f1-1e92-4f9b-8fcd-5981eee55bd7", 00:15:42.740 "strip_size_kb": 64, 00:15:42.740 "state": "online", 00:15:42.740 "raid_level": "raid5f", 00:15:42.740 "superblock": true, 00:15:42.740 "num_base_bdevs": 3, 00:15:42.740 "num_base_bdevs_discovered": 3, 00:15:42.740 "num_base_bdevs_operational": 3, 00:15:42.740 "process": { 00:15:42.740 "type": "rebuild", 00:15:42.740 "target": "spare", 00:15:42.740 "progress": { 00:15:42.740 "blocks": 20480, 00:15:42.740 "percent": 16 00:15:42.740 } 00:15:42.740 }, 00:15:42.740 "base_bdevs_list": [ 00:15:42.740 { 00:15:42.740 "name": "spare", 00:15:42.740 "uuid": "9424bf04-a069-5946-a191-01f26f059d5b", 00:15:42.740 "is_configured": true, 00:15:42.740 "data_offset": 2048, 00:15:42.740 "data_size": 63488 00:15:42.740 }, 00:15:42.740 { 00:15:42.740 "name": "BaseBdev2", 00:15:42.740 "uuid": "5715dec5-e011-5e06-97d6-97ea37f3effb", 00:15:42.740 "is_configured": true, 00:15:42.740 "data_offset": 2048, 00:15:42.740 "data_size": 63488 00:15:42.740 }, 00:15:42.740 { 00:15:42.740 "name": "BaseBdev3", 00:15:42.740 "uuid": "2d24eabe-1035-59be-8749-e0c3cfdef1f0", 00:15:42.740 "is_configured": true, 00:15:42.740 "data_offset": 2048, 00:15:42.740 "data_size": 63488 00:15:42.740 } 00:15:42.740 ] 00:15:42.740 }' 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.740 [2024-11-27 11:54:08.860776] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:42.740 [2024-11-27 11:54:08.916870] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:42.740 [2024-11-27 11:54:08.917035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.740 [2024-11-27 11:54:08.917060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:42.740 [2024-11-27 11:54:08.917073] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.740 11:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.740 11:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.740 "name": "raid_bdev1", 00:15:42.740 "uuid": "62e6b6f1-1e92-4f9b-8fcd-5981eee55bd7", 00:15:42.740 "strip_size_kb": 64, 00:15:42.740 "state": "online", 00:15:42.740 "raid_level": "raid5f", 00:15:42.740 "superblock": true, 00:15:42.740 "num_base_bdevs": 3, 00:15:42.740 "num_base_bdevs_discovered": 2, 00:15:42.740 "num_base_bdevs_operational": 2, 00:15:42.740 "base_bdevs_list": [ 00:15:42.740 { 00:15:42.740 "name": null, 00:15:42.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.740 "is_configured": false, 00:15:42.740 "data_offset": 0, 00:15:42.740 "data_size": 63488 00:15:42.740 }, 00:15:42.740 { 00:15:42.740 "name": "BaseBdev2", 00:15:42.740 "uuid": "5715dec5-e011-5e06-97d6-97ea37f3effb", 00:15:42.740 "is_configured": true, 00:15:42.740 "data_offset": 2048, 00:15:42.740 "data_size": 63488 00:15:42.740 }, 00:15:42.740 { 00:15:42.740 "name": "BaseBdev3", 00:15:42.740 "uuid": 
"2d24eabe-1035-59be-8749-e0c3cfdef1f0", 00:15:42.740 "is_configured": true, 00:15:42.740 "data_offset": 2048, 00:15:42.740 "data_size": 63488 00:15:42.740 } 00:15:42.740 ] 00:15:42.740 }' 00:15:42.740 11:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.740 11:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.311 11:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:43.311 11:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.311 11:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.311 [2024-11-27 11:54:09.403999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:43.311 [2024-11-27 11:54:09.404137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.311 [2024-11-27 11:54:09.404192] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:43.311 [2024-11-27 11:54:09.404239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.311 [2024-11-27 11:54:09.404922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.311 [2024-11-27 11:54:09.405005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:43.311 [2024-11-27 11:54:09.405164] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:43.311 [2024-11-27 11:54:09.405224] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:43.311 [2024-11-27 11:54:09.405280] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:43.311 [2024-11-27 11:54:09.405339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:43.311 [2024-11-27 11:54:09.425711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:15:43.311 spare 00:15:43.311 11:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.311 11:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:43.311 [2024-11-27 11:54:09.435456] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:44.248 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.248 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.248 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.248 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.248 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.248 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.248 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.248 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.248 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.248 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.248 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.248 "name": "raid_bdev1", 00:15:44.248 "uuid": "62e6b6f1-1e92-4f9b-8fcd-5981eee55bd7", 00:15:44.248 "strip_size_kb": 64, 00:15:44.249 "state": 
"online", 00:15:44.249 "raid_level": "raid5f", 00:15:44.249 "superblock": true, 00:15:44.249 "num_base_bdevs": 3, 00:15:44.249 "num_base_bdevs_discovered": 3, 00:15:44.249 "num_base_bdevs_operational": 3, 00:15:44.249 "process": { 00:15:44.249 "type": "rebuild", 00:15:44.249 "target": "spare", 00:15:44.249 "progress": { 00:15:44.249 "blocks": 20480, 00:15:44.249 "percent": 16 00:15:44.249 } 00:15:44.249 }, 00:15:44.249 "base_bdevs_list": [ 00:15:44.249 { 00:15:44.249 "name": "spare", 00:15:44.249 "uuid": "9424bf04-a069-5946-a191-01f26f059d5b", 00:15:44.249 "is_configured": true, 00:15:44.249 "data_offset": 2048, 00:15:44.249 "data_size": 63488 00:15:44.249 }, 00:15:44.249 { 00:15:44.249 "name": "BaseBdev2", 00:15:44.249 "uuid": "5715dec5-e011-5e06-97d6-97ea37f3effb", 00:15:44.249 "is_configured": true, 00:15:44.249 "data_offset": 2048, 00:15:44.249 "data_size": 63488 00:15:44.249 }, 00:15:44.249 { 00:15:44.249 "name": "BaseBdev3", 00:15:44.249 "uuid": "2d24eabe-1035-59be-8749-e0c3cfdef1f0", 00:15:44.249 "is_configured": true, 00:15:44.249 "data_offset": 2048, 00:15:44.249 "data_size": 63488 00:15:44.249 } 00:15:44.249 ] 00:15:44.249 }' 00:15:44.249 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.249 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.249 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.249 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.249 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:44.249 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.249 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.249 [2024-11-27 11:54:10.587618] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:44.507 [2024-11-27 11:54:10.647762] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:44.507 [2024-11-27 11:54:10.647981] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.507 [2024-11-27 11:54:10.648044] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:44.507 [2024-11-27 11:54:10.648091] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:44.507 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.507 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:44.507 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.507 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.507 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.507 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.507 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:44.507 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.507 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.507 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.507 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.507 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.507 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.507 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.507 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.507 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.507 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.507 "name": "raid_bdev1", 00:15:44.507 "uuid": "62e6b6f1-1e92-4f9b-8fcd-5981eee55bd7", 00:15:44.507 "strip_size_kb": 64, 00:15:44.507 "state": "online", 00:15:44.507 "raid_level": "raid5f", 00:15:44.507 "superblock": true, 00:15:44.507 "num_base_bdevs": 3, 00:15:44.507 "num_base_bdevs_discovered": 2, 00:15:44.507 "num_base_bdevs_operational": 2, 00:15:44.507 "base_bdevs_list": [ 00:15:44.507 { 00:15:44.507 "name": null, 00:15:44.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.507 "is_configured": false, 00:15:44.507 "data_offset": 0, 00:15:44.507 "data_size": 63488 00:15:44.507 }, 00:15:44.507 { 00:15:44.507 "name": "BaseBdev2", 00:15:44.507 "uuid": "5715dec5-e011-5e06-97d6-97ea37f3effb", 00:15:44.507 "is_configured": true, 00:15:44.507 "data_offset": 2048, 00:15:44.507 "data_size": 63488 00:15:44.507 }, 00:15:44.507 { 00:15:44.507 "name": "BaseBdev3", 00:15:44.507 "uuid": "2d24eabe-1035-59be-8749-e0c3cfdef1f0", 00:15:44.507 "is_configured": true, 00:15:44.507 "data_offset": 2048, 00:15:44.507 "data_size": 63488 00:15:44.507 } 00:15:44.507 ] 00:15:44.507 }' 00:15:44.507 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.507 11:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.807 11:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:44.808 11:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:44.808 11:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:44.808 11:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:44.808 11:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.808 11:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.808 11:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.808 11:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.808 11:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.808 11:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.073 11:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.073 "name": "raid_bdev1", 00:15:45.073 "uuid": "62e6b6f1-1e92-4f9b-8fcd-5981eee55bd7", 00:15:45.073 "strip_size_kb": 64, 00:15:45.073 "state": "online", 00:15:45.073 "raid_level": "raid5f", 00:15:45.073 "superblock": true, 00:15:45.073 "num_base_bdevs": 3, 00:15:45.073 "num_base_bdevs_discovered": 2, 00:15:45.073 "num_base_bdevs_operational": 2, 00:15:45.073 "base_bdevs_list": [ 00:15:45.073 { 00:15:45.073 "name": null, 00:15:45.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.073 "is_configured": false, 00:15:45.073 "data_offset": 0, 00:15:45.073 "data_size": 63488 00:15:45.073 }, 00:15:45.073 { 00:15:45.073 "name": "BaseBdev2", 00:15:45.073 "uuid": "5715dec5-e011-5e06-97d6-97ea37f3effb", 00:15:45.073 "is_configured": true, 00:15:45.073 "data_offset": 2048, 00:15:45.073 "data_size": 63488 00:15:45.073 }, 00:15:45.073 { 00:15:45.073 "name": "BaseBdev3", 00:15:45.073 "uuid": "2d24eabe-1035-59be-8749-e0c3cfdef1f0", 00:15:45.073 "is_configured": true, 
00:15:45.073 "data_offset": 2048, 00:15:45.073 "data_size": 63488 00:15:45.073 } 00:15:45.073 ] 00:15:45.073 }' 00:15:45.073 11:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.073 11:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:45.073 11:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.073 11:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:45.073 11:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:45.073 11:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.073 11:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.073 11:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.073 11:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:45.073 11:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.073 11:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.073 [2024-11-27 11:54:11.310157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:45.073 [2024-11-27 11:54:11.310231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.073 [2024-11-27 11:54:11.310263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:45.073 [2024-11-27 11:54:11.310275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.073 [2024-11-27 11:54:11.310903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.073 [2024-11-27 
11:54:11.310936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:45.073 [2024-11-27 11:54:11.311039] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:45.073 [2024-11-27 11:54:11.311059] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:45.073 [2024-11-27 11:54:11.311086] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:45.073 [2024-11-27 11:54:11.311100] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:45.073 BaseBdev1 00:15:45.073 11:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.073 11:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:46.009 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:46.009 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.009 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.009 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.009 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.009 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:46.009 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.009 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.009 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.009 11:54:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.009 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.009 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.009 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.009 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.009 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.009 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.009 "name": "raid_bdev1", 00:15:46.009 "uuid": "62e6b6f1-1e92-4f9b-8fcd-5981eee55bd7", 00:15:46.009 "strip_size_kb": 64, 00:15:46.009 "state": "online", 00:15:46.009 "raid_level": "raid5f", 00:15:46.009 "superblock": true, 00:15:46.010 "num_base_bdevs": 3, 00:15:46.010 "num_base_bdevs_discovered": 2, 00:15:46.010 "num_base_bdevs_operational": 2, 00:15:46.010 "base_bdevs_list": [ 00:15:46.010 { 00:15:46.010 "name": null, 00:15:46.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.010 "is_configured": false, 00:15:46.010 "data_offset": 0, 00:15:46.010 "data_size": 63488 00:15:46.010 }, 00:15:46.010 { 00:15:46.010 "name": "BaseBdev2", 00:15:46.010 "uuid": "5715dec5-e011-5e06-97d6-97ea37f3effb", 00:15:46.010 "is_configured": true, 00:15:46.010 "data_offset": 2048, 00:15:46.010 "data_size": 63488 00:15:46.010 }, 00:15:46.010 { 00:15:46.010 "name": "BaseBdev3", 00:15:46.010 "uuid": "2d24eabe-1035-59be-8749-e0c3cfdef1f0", 00:15:46.010 "is_configured": true, 00:15:46.010 "data_offset": 2048, 00:15:46.010 "data_size": 63488 00:15:46.010 } 00:15:46.010 ] 00:15:46.010 }' 00:15:46.010 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.010 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:46.578 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:46.578 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.578 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:46.578 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:46.578 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.578 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.578 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.578 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.578 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.578 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.578 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.578 "name": "raid_bdev1", 00:15:46.578 "uuid": "62e6b6f1-1e92-4f9b-8fcd-5981eee55bd7", 00:15:46.578 "strip_size_kb": 64, 00:15:46.578 "state": "online", 00:15:46.578 "raid_level": "raid5f", 00:15:46.578 "superblock": true, 00:15:46.578 "num_base_bdevs": 3, 00:15:46.578 "num_base_bdevs_discovered": 2, 00:15:46.578 "num_base_bdevs_operational": 2, 00:15:46.578 "base_bdevs_list": [ 00:15:46.578 { 00:15:46.578 "name": null, 00:15:46.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.578 "is_configured": false, 00:15:46.578 "data_offset": 0, 00:15:46.578 "data_size": 63488 00:15:46.578 }, 00:15:46.578 { 00:15:46.578 "name": "BaseBdev2", 00:15:46.578 "uuid": "5715dec5-e011-5e06-97d6-97ea37f3effb", 
00:15:46.578 "is_configured": true, 00:15:46.578 "data_offset": 2048, 00:15:46.578 "data_size": 63488 00:15:46.578 }, 00:15:46.578 { 00:15:46.578 "name": "BaseBdev3", 00:15:46.578 "uuid": "2d24eabe-1035-59be-8749-e0c3cfdef1f0", 00:15:46.578 "is_configured": true, 00:15:46.578 "data_offset": 2048, 00:15:46.578 "data_size": 63488 00:15:46.578 } 00:15:46.578 ] 00:15:46.578 }' 00:15:46.578 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.578 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:46.578 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.838 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:46.838 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:46.838 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:46.838 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:46.838 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:46.838 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:46.838 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:46.838 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:46.838 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:46.838 11:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.838 11:54:12 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.838 [2024-11-27 11:54:13.004078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:46.838 [2024-11-27 11:54:13.004349] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:46.838 [2024-11-27 11:54:13.004427] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:46.838 request: 00:15:46.838 { 00:15:46.838 "base_bdev": "BaseBdev1", 00:15:46.838 "raid_bdev": "raid_bdev1", 00:15:46.838 "method": "bdev_raid_add_base_bdev", 00:15:46.838 "req_id": 1 00:15:46.838 } 00:15:46.838 Got JSON-RPC error response 00:15:46.838 response: 00:15:46.838 { 00:15:46.838 "code": -22, 00:15:46.838 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:46.838 } 00:15:46.838 11:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:46.838 11:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:46.838 11:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:46.838 11:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:46.838 11:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:46.838 11:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:47.777 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:47.777 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.777 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.777 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.777 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.777 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:47.777 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.777 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.777 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.777 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.777 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.777 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.777 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.777 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.777 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.777 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.777 "name": "raid_bdev1", 00:15:47.777 "uuid": "62e6b6f1-1e92-4f9b-8fcd-5981eee55bd7", 00:15:47.777 "strip_size_kb": 64, 00:15:47.777 "state": "online", 00:15:47.777 "raid_level": "raid5f", 00:15:47.777 "superblock": true, 00:15:47.777 "num_base_bdevs": 3, 00:15:47.777 "num_base_bdevs_discovered": 2, 00:15:47.777 "num_base_bdevs_operational": 2, 00:15:47.777 "base_bdevs_list": [ 00:15:47.777 { 00:15:47.777 "name": null, 00:15:47.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.777 "is_configured": false, 00:15:47.777 "data_offset": 0, 00:15:47.777 "data_size": 63488 00:15:47.777 }, 00:15:47.777 { 00:15:47.777 
"name": "BaseBdev2", 00:15:47.777 "uuid": "5715dec5-e011-5e06-97d6-97ea37f3effb", 00:15:47.777 "is_configured": true, 00:15:47.777 "data_offset": 2048, 00:15:47.777 "data_size": 63488 00:15:47.777 }, 00:15:47.777 { 00:15:47.777 "name": "BaseBdev3", 00:15:47.777 "uuid": "2d24eabe-1035-59be-8749-e0c3cfdef1f0", 00:15:47.777 "is_configured": true, 00:15:47.777 "data_offset": 2048, 00:15:47.777 "data_size": 63488 00:15:47.777 } 00:15:47.777 ] 00:15:47.777 }' 00:15:47.777 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.777 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.345 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:48.345 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.345 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:48.345 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:48.345 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.345 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.345 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.345 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.345 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.345 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.345 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.345 "name": "raid_bdev1", 00:15:48.345 "uuid": "62e6b6f1-1e92-4f9b-8fcd-5981eee55bd7", 00:15:48.345 
"strip_size_kb": 64, 00:15:48.345 "state": "online", 00:15:48.345 "raid_level": "raid5f", 00:15:48.345 "superblock": true, 00:15:48.345 "num_base_bdevs": 3, 00:15:48.345 "num_base_bdevs_discovered": 2, 00:15:48.345 "num_base_bdevs_operational": 2, 00:15:48.345 "base_bdevs_list": [ 00:15:48.345 { 00:15:48.345 "name": null, 00:15:48.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.345 "is_configured": false, 00:15:48.345 "data_offset": 0, 00:15:48.345 "data_size": 63488 00:15:48.345 }, 00:15:48.345 { 00:15:48.345 "name": "BaseBdev2", 00:15:48.345 "uuid": "5715dec5-e011-5e06-97d6-97ea37f3effb", 00:15:48.345 "is_configured": true, 00:15:48.345 "data_offset": 2048, 00:15:48.345 "data_size": 63488 00:15:48.345 }, 00:15:48.345 { 00:15:48.345 "name": "BaseBdev3", 00:15:48.345 "uuid": "2d24eabe-1035-59be-8749-e0c3cfdef1f0", 00:15:48.345 "is_configured": true, 00:15:48.345 "data_offset": 2048, 00:15:48.345 "data_size": 63488 00:15:48.345 } 00:15:48.345 ] 00:15:48.345 }' 00:15:48.345 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.345 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:48.345 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.345 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:48.345 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82063 00:15:48.345 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82063 ']' 00:15:48.345 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82063 00:15:48.345 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:48.345 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:48.345 11:54:14 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82063 00:15:48.345 killing process with pid 82063 00:15:48.345 Received shutdown signal, test time was about 60.000000 seconds 00:15:48.345 00:15:48.345 Latency(us) 00:15:48.345 [2024-11-27T11:54:14.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:48.345 [2024-11-27T11:54:14.730Z] =================================================================================================================== 00:15:48.345 [2024-11-27T11:54:14.730Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:48.345 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:48.345 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:48.345 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82063' 00:15:48.345 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82063 00:15:48.345 [2024-11-27 11:54:14.593104] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:48.345 11:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82063 00:15:48.345 [2024-11-27 11:54:14.593256] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:48.345 [2024-11-27 11:54:14.593336] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:48.345 [2024-11-27 11:54:14.593353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:48.914 [2024-11-27 11:54:15.082258] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:50.292 ************************************ 00:15:50.292 END TEST raid5f_rebuild_test_sb 00:15:50.292 ************************************ 00:15:50.292 11:54:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:50.292 00:15:50.292 real 0m24.328s 00:15:50.292 user 0m31.124s 00:15:50.292 sys 0m3.003s 00:15:50.292 11:54:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:50.292 11:54:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.292 11:54:16 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:50.292 11:54:16 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:15:50.292 11:54:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:50.292 11:54:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:50.292 11:54:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:50.292 ************************************ 00:15:50.292 START TEST raid5f_state_function_test 00:15:50.292 ************************************ 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82826 00:15:50.292 Process raid pid: 82826 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82826' 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82826 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82826 ']' 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:50.292 11:54:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.292 [2024-11-27 11:54:16.633035] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:15:50.292 [2024-11-27 11:54:16.633167] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.551 [2024-11-27 11:54:16.816584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.810 [2024-11-27 11:54:16.953545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.070 [2024-11-27 11:54:17.200161] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:51.070 [2024-11-27 11:54:17.200322] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:51.329 11:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:51.329 11:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:51.329 11:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:51.329 11:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.329 11:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.329 [2024-11-27 11:54:17.535578] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:51.329 [2024-11-27 11:54:17.535635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:51.329 [2024-11-27 11:54:17.535649] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:51.329 [2024-11-27 11:54:17.535661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:51.329 [2024-11-27 11:54:17.535669] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:51.329 [2024-11-27 11:54:17.535679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:51.329 [2024-11-27 11:54:17.535687] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:51.329 [2024-11-27 11:54:17.535698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:51.329 11:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.329 11:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:51.329 11:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.330 11:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.330 11:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.330 11:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.330 11:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:51.330 11:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.330 11:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.330 11:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.330 11:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.330 11:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.330 11:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.330 11:54:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.330 11:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.330 11:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.330 11:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.330 "name": "Existed_Raid", 00:15:51.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.330 "strip_size_kb": 64, 00:15:51.330 "state": "configuring", 00:15:51.330 "raid_level": "raid5f", 00:15:51.330 "superblock": false, 00:15:51.330 "num_base_bdevs": 4, 00:15:51.330 "num_base_bdevs_discovered": 0, 00:15:51.330 "num_base_bdevs_operational": 4, 00:15:51.330 "base_bdevs_list": [ 00:15:51.330 { 00:15:51.330 "name": "BaseBdev1", 00:15:51.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.330 "is_configured": false, 00:15:51.330 "data_offset": 0, 00:15:51.330 "data_size": 0 00:15:51.330 }, 00:15:51.330 { 00:15:51.330 "name": "BaseBdev2", 00:15:51.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.330 "is_configured": false, 00:15:51.330 "data_offset": 0, 00:15:51.330 "data_size": 0 00:15:51.330 }, 00:15:51.330 { 00:15:51.330 "name": "BaseBdev3", 00:15:51.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.330 "is_configured": false, 00:15:51.330 "data_offset": 0, 00:15:51.330 "data_size": 0 00:15:51.330 }, 00:15:51.330 { 00:15:51.330 "name": "BaseBdev4", 00:15:51.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.330 "is_configured": false, 00:15:51.330 "data_offset": 0, 00:15:51.330 "data_size": 0 00:15:51.330 } 00:15:51.330 ] 00:15:51.330 }' 00:15:51.330 11:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.330 11:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.898 11:54:18 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:51.898 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.898 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.898 [2024-11-27 11:54:18.034790] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:51.898 [2024-11-27 11:54:18.034859] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:51.898 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.898 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:51.898 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.898 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.898 [2024-11-27 11:54:18.042760] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:51.898 [2024-11-27 11:54:18.042812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:51.898 [2024-11-27 11:54:18.042824] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:51.898 [2024-11-27 11:54:18.042846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:51.898 [2024-11-27 11:54:18.042854] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:51.898 [2024-11-27 11:54:18.042866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:51.898 [2024-11-27 11:54:18.042873] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:15:51.898 [2024-11-27 11:54:18.042884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:51.898 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.898 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:51.898 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.898 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.898 [2024-11-27 11:54:18.093149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:51.898 BaseBdev1 00:15:51.898 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.898 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:51.898 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:51.898 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:51.898 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:51.898 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:51.898 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:51.898 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:51.898 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.898 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.899 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.899 
11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:51.899 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.899 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.899 [ 00:15:51.899 { 00:15:51.899 "name": "BaseBdev1", 00:15:51.899 "aliases": [ 00:15:51.899 "3b4f0faf-1716-41df-8f96-c3e1a3b0079f" 00:15:51.899 ], 00:15:51.899 "product_name": "Malloc disk", 00:15:51.899 "block_size": 512, 00:15:51.899 "num_blocks": 65536, 00:15:51.899 "uuid": "3b4f0faf-1716-41df-8f96-c3e1a3b0079f", 00:15:51.899 "assigned_rate_limits": { 00:15:51.899 "rw_ios_per_sec": 0, 00:15:51.899 "rw_mbytes_per_sec": 0, 00:15:51.899 "r_mbytes_per_sec": 0, 00:15:51.899 "w_mbytes_per_sec": 0 00:15:51.899 }, 00:15:51.899 "claimed": true, 00:15:51.899 "claim_type": "exclusive_write", 00:15:51.899 "zoned": false, 00:15:51.899 "supported_io_types": { 00:15:51.899 "read": true, 00:15:51.899 "write": true, 00:15:51.899 "unmap": true, 00:15:51.899 "flush": true, 00:15:51.899 "reset": true, 00:15:51.899 "nvme_admin": false, 00:15:51.899 "nvme_io": false, 00:15:51.899 "nvme_io_md": false, 00:15:51.899 "write_zeroes": true, 00:15:51.899 "zcopy": true, 00:15:51.899 "get_zone_info": false, 00:15:51.899 "zone_management": false, 00:15:51.899 "zone_append": false, 00:15:51.899 "compare": false, 00:15:51.899 "compare_and_write": false, 00:15:51.899 "abort": true, 00:15:51.899 "seek_hole": false, 00:15:51.899 "seek_data": false, 00:15:51.899 "copy": true, 00:15:51.899 "nvme_iov_md": false 00:15:51.899 }, 00:15:51.899 "memory_domains": [ 00:15:51.899 { 00:15:51.899 "dma_device_id": "system", 00:15:51.899 "dma_device_type": 1 00:15:51.899 }, 00:15:51.899 { 00:15:51.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.899 "dma_device_type": 2 00:15:51.899 } 00:15:51.899 ], 00:15:51.899 "driver_specific": {} 00:15:51.899 } 
00:15:51.899 ] 00:15:51.899 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.899 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:51.899 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:51.899 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.899 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.899 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.899 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.899 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:51.899 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.899 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.899 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.899 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.899 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.899 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.899 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.899 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.899 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:51.899 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.899 "name": "Existed_Raid", 00:15:51.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.899 "strip_size_kb": 64, 00:15:51.899 "state": "configuring", 00:15:51.899 "raid_level": "raid5f", 00:15:51.899 "superblock": false, 00:15:51.899 "num_base_bdevs": 4, 00:15:51.899 "num_base_bdevs_discovered": 1, 00:15:51.899 "num_base_bdevs_operational": 4, 00:15:51.899 "base_bdevs_list": [ 00:15:51.899 { 00:15:51.899 "name": "BaseBdev1", 00:15:51.899 "uuid": "3b4f0faf-1716-41df-8f96-c3e1a3b0079f", 00:15:51.899 "is_configured": true, 00:15:51.899 "data_offset": 0, 00:15:51.899 "data_size": 65536 00:15:51.899 }, 00:15:51.899 { 00:15:51.899 "name": "BaseBdev2", 00:15:51.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.899 "is_configured": false, 00:15:51.899 "data_offset": 0, 00:15:51.899 "data_size": 0 00:15:51.899 }, 00:15:51.899 { 00:15:51.899 "name": "BaseBdev3", 00:15:51.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.899 "is_configured": false, 00:15:51.899 "data_offset": 0, 00:15:51.899 "data_size": 0 00:15:51.899 }, 00:15:51.899 { 00:15:51.899 "name": "BaseBdev4", 00:15:51.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.899 "is_configured": false, 00:15:51.899 "data_offset": 0, 00:15:51.899 "data_size": 0 00:15:51.899 } 00:15:51.899 ] 00:15:51.899 }' 00:15:51.899 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.899 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.468 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:52.468 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.468 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.468 
[2024-11-27 11:54:18.596532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:52.468 [2024-11-27 11:54:18.596661] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:52.468 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.468 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:52.468 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.468 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.468 [2024-11-27 11:54:18.608580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:52.468 [2024-11-27 11:54:18.610785] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:52.468 [2024-11-27 11:54:18.610903] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:52.468 [2024-11-27 11:54:18.610949] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:52.468 [2024-11-27 11:54:18.611001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:52.468 [2024-11-27 11:54:18.611038] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:52.468 [2024-11-27 11:54:18.611081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:52.468 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.468 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:52.468 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:15:52.468 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:52.468 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.468 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:52.468 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.468 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.468 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:52.468 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.468 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.468 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.468 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.468 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.468 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.468 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.468 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.468 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.468 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.468 "name": "Existed_Raid", 00:15:52.468 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:52.468 "strip_size_kb": 64, 00:15:52.468 "state": "configuring", 00:15:52.468 "raid_level": "raid5f", 00:15:52.468 "superblock": false, 00:15:52.468 "num_base_bdevs": 4, 00:15:52.468 "num_base_bdevs_discovered": 1, 00:15:52.468 "num_base_bdevs_operational": 4, 00:15:52.468 "base_bdevs_list": [ 00:15:52.468 { 00:15:52.468 "name": "BaseBdev1", 00:15:52.468 "uuid": "3b4f0faf-1716-41df-8f96-c3e1a3b0079f", 00:15:52.468 "is_configured": true, 00:15:52.468 "data_offset": 0, 00:15:52.468 "data_size": 65536 00:15:52.468 }, 00:15:52.468 { 00:15:52.468 "name": "BaseBdev2", 00:15:52.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.468 "is_configured": false, 00:15:52.468 "data_offset": 0, 00:15:52.468 "data_size": 0 00:15:52.468 }, 00:15:52.468 { 00:15:52.468 "name": "BaseBdev3", 00:15:52.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.468 "is_configured": false, 00:15:52.468 "data_offset": 0, 00:15:52.468 "data_size": 0 00:15:52.468 }, 00:15:52.468 { 00:15:52.468 "name": "BaseBdev4", 00:15:52.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.468 "is_configured": false, 00:15:52.468 "data_offset": 0, 00:15:52.468 "data_size": 0 00:15:52.468 } 00:15:52.469 ] 00:15:52.469 }' 00:15:52.469 11:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.469 11:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.728 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:52.728 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.728 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.728 [2024-11-27 11:54:19.100663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:52.728 BaseBdev2 00:15:52.728 11:54:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.728 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:52.728 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:52.728 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:52.728 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:52.728 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:52.728 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:52.728 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:52.728 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.728 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.996 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.996 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:52.996 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.996 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.996 [ 00:15:52.996 { 00:15:52.996 "name": "BaseBdev2", 00:15:52.996 "aliases": [ 00:15:52.996 "454c571f-8a7b-42d1-9ce8-7371a2534c92" 00:15:52.996 ], 00:15:52.996 "product_name": "Malloc disk", 00:15:52.996 "block_size": 512, 00:15:52.996 "num_blocks": 65536, 00:15:52.996 "uuid": "454c571f-8a7b-42d1-9ce8-7371a2534c92", 00:15:52.996 "assigned_rate_limits": { 00:15:52.996 "rw_ios_per_sec": 0, 00:15:52.997 "rw_mbytes_per_sec": 0, 00:15:52.997 
"r_mbytes_per_sec": 0, 00:15:52.997 "w_mbytes_per_sec": 0 00:15:52.997 }, 00:15:52.997 "claimed": true, 00:15:52.997 "claim_type": "exclusive_write", 00:15:52.997 "zoned": false, 00:15:52.997 "supported_io_types": { 00:15:52.997 "read": true, 00:15:52.997 "write": true, 00:15:52.997 "unmap": true, 00:15:52.997 "flush": true, 00:15:52.997 "reset": true, 00:15:52.997 "nvme_admin": false, 00:15:52.997 "nvme_io": false, 00:15:52.997 "nvme_io_md": false, 00:15:52.997 "write_zeroes": true, 00:15:52.997 "zcopy": true, 00:15:52.997 "get_zone_info": false, 00:15:52.997 "zone_management": false, 00:15:52.997 "zone_append": false, 00:15:52.997 "compare": false, 00:15:52.997 "compare_and_write": false, 00:15:52.997 "abort": true, 00:15:52.997 "seek_hole": false, 00:15:52.997 "seek_data": false, 00:15:52.997 "copy": true, 00:15:52.997 "nvme_iov_md": false 00:15:52.997 }, 00:15:52.997 "memory_domains": [ 00:15:52.997 { 00:15:52.997 "dma_device_id": "system", 00:15:52.997 "dma_device_type": 1 00:15:52.997 }, 00:15:52.997 { 00:15:52.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.997 "dma_device_type": 2 00:15:52.997 } 00:15:52.997 ], 00:15:52.997 "driver_specific": {} 00:15:52.997 } 00:15:52.997 ] 00:15:52.997 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.997 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:52.997 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:52.997 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:52.997 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:52.997 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.997 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:15:52.997 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.997 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.997 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:52.997 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.997 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.997 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.997 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.997 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.997 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.997 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.997 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.998 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.998 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.998 "name": "Existed_Raid", 00:15:52.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.998 "strip_size_kb": 64, 00:15:52.998 "state": "configuring", 00:15:52.998 "raid_level": "raid5f", 00:15:52.998 "superblock": false, 00:15:52.998 "num_base_bdevs": 4, 00:15:52.998 "num_base_bdevs_discovered": 2, 00:15:52.998 "num_base_bdevs_operational": 4, 00:15:52.998 "base_bdevs_list": [ 00:15:52.998 { 00:15:52.998 "name": "BaseBdev1", 00:15:52.998 "uuid": 
"3b4f0faf-1716-41df-8f96-c3e1a3b0079f", 00:15:52.998 "is_configured": true, 00:15:52.998 "data_offset": 0, 00:15:52.998 "data_size": 65536 00:15:52.998 }, 00:15:52.998 { 00:15:52.998 "name": "BaseBdev2", 00:15:52.998 "uuid": "454c571f-8a7b-42d1-9ce8-7371a2534c92", 00:15:52.998 "is_configured": true, 00:15:52.998 "data_offset": 0, 00:15:52.998 "data_size": 65536 00:15:52.998 }, 00:15:52.998 { 00:15:52.998 "name": "BaseBdev3", 00:15:52.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.998 "is_configured": false, 00:15:52.998 "data_offset": 0, 00:15:52.998 "data_size": 0 00:15:52.998 }, 00:15:52.998 { 00:15:52.998 "name": "BaseBdev4", 00:15:52.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.998 "is_configured": false, 00:15:52.998 "data_offset": 0, 00:15:52.998 "data_size": 0 00:15:52.998 } 00:15:52.998 ] 00:15:52.998 }' 00:15:52.998 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.998 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.261 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:53.261 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.261 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.521 [2024-11-27 11:54:19.663169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:53.521 BaseBdev3 00:15:53.521 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.521 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:53.521 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:53.521 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:15:53.521 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:53.521 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:53.521 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:53.521 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:53.521 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.521 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.521 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.521 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:53.521 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.521 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.521 [ 00:15:53.521 { 00:15:53.521 "name": "BaseBdev3", 00:15:53.521 "aliases": [ 00:15:53.521 "d59c9b1d-08ef-48a7-af5c-c66843df5350" 00:15:53.521 ], 00:15:53.521 "product_name": "Malloc disk", 00:15:53.521 "block_size": 512, 00:15:53.521 "num_blocks": 65536, 00:15:53.521 "uuid": "d59c9b1d-08ef-48a7-af5c-c66843df5350", 00:15:53.521 "assigned_rate_limits": { 00:15:53.521 "rw_ios_per_sec": 0, 00:15:53.521 "rw_mbytes_per_sec": 0, 00:15:53.521 "r_mbytes_per_sec": 0, 00:15:53.521 "w_mbytes_per_sec": 0 00:15:53.521 }, 00:15:53.521 "claimed": true, 00:15:53.521 "claim_type": "exclusive_write", 00:15:53.521 "zoned": false, 00:15:53.522 "supported_io_types": { 00:15:53.522 "read": true, 00:15:53.522 "write": true, 00:15:53.522 "unmap": true, 00:15:53.522 "flush": true, 00:15:53.522 "reset": true, 00:15:53.522 "nvme_admin": false, 
00:15:53.522 "nvme_io": false, 00:15:53.522 "nvme_io_md": false, 00:15:53.522 "write_zeroes": true, 00:15:53.522 "zcopy": true, 00:15:53.522 "get_zone_info": false, 00:15:53.522 "zone_management": false, 00:15:53.522 "zone_append": false, 00:15:53.522 "compare": false, 00:15:53.522 "compare_and_write": false, 00:15:53.522 "abort": true, 00:15:53.522 "seek_hole": false, 00:15:53.522 "seek_data": false, 00:15:53.522 "copy": true, 00:15:53.522 "nvme_iov_md": false 00:15:53.522 }, 00:15:53.522 "memory_domains": [ 00:15:53.522 { 00:15:53.522 "dma_device_id": "system", 00:15:53.522 "dma_device_type": 1 00:15:53.522 }, 00:15:53.522 { 00:15:53.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.522 "dma_device_type": 2 00:15:53.522 } 00:15:53.522 ], 00:15:53.522 "driver_specific": {} 00:15:53.522 } 00:15:53.522 ] 00:15:53.522 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.522 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:53.522 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:53.522 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:53.522 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:53.522 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.522 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:53.522 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.522 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.522 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:15:53.522 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.522 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.522 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.522 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.522 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.522 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.522 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.522 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.522 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.522 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.522 "name": "Existed_Raid", 00:15:53.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.522 "strip_size_kb": 64, 00:15:53.522 "state": "configuring", 00:15:53.522 "raid_level": "raid5f", 00:15:53.522 "superblock": false, 00:15:53.522 "num_base_bdevs": 4, 00:15:53.522 "num_base_bdevs_discovered": 3, 00:15:53.522 "num_base_bdevs_operational": 4, 00:15:53.522 "base_bdevs_list": [ 00:15:53.522 { 00:15:53.522 "name": "BaseBdev1", 00:15:53.522 "uuid": "3b4f0faf-1716-41df-8f96-c3e1a3b0079f", 00:15:53.522 "is_configured": true, 00:15:53.522 "data_offset": 0, 00:15:53.522 "data_size": 65536 00:15:53.522 }, 00:15:53.522 { 00:15:53.522 "name": "BaseBdev2", 00:15:53.522 "uuid": "454c571f-8a7b-42d1-9ce8-7371a2534c92", 00:15:53.522 "is_configured": true, 00:15:53.522 "data_offset": 0, 00:15:53.522 "data_size": 65536 00:15:53.522 }, 00:15:53.522 { 
00:15:53.522 "name": "BaseBdev3", 00:15:53.522 "uuid": "d59c9b1d-08ef-48a7-af5c-c66843df5350", 00:15:53.522 "is_configured": true, 00:15:53.522 "data_offset": 0, 00:15:53.522 "data_size": 65536 00:15:53.522 }, 00:15:53.522 { 00:15:53.522 "name": "BaseBdev4", 00:15:53.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.522 "is_configured": false, 00:15:53.522 "data_offset": 0, 00:15:53.522 "data_size": 0 00:15:53.522 } 00:15:53.522 ] 00:15:53.522 }' 00:15:53.522 11:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.522 11:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.091 [2024-11-27 11:54:20.236265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:54.091 [2024-11-27 11:54:20.236346] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:54.091 [2024-11-27 11:54:20.236364] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:54.091 [2024-11-27 11:54:20.236651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:54.091 [2024-11-27 11:54:20.244151] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:54.091 [2024-11-27 11:54:20.244179] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:54.091 [2024-11-27 11:54:20.244475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.091 BaseBdev4 00:15:54.091 11:54:20 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.091 [ 00:15:54.091 { 00:15:54.091 "name": "BaseBdev4", 00:15:54.091 "aliases": [ 00:15:54.091 "7035e1da-8e37-447d-8c24-47372d24ec94" 00:15:54.091 ], 00:15:54.091 "product_name": "Malloc disk", 00:15:54.091 "block_size": 512, 00:15:54.091 "num_blocks": 65536, 00:15:54.091 "uuid": "7035e1da-8e37-447d-8c24-47372d24ec94", 00:15:54.091 "assigned_rate_limits": { 00:15:54.091 "rw_ios_per_sec": 0, 00:15:54.091 
"rw_mbytes_per_sec": 0, 00:15:54.091 "r_mbytes_per_sec": 0, 00:15:54.091 "w_mbytes_per_sec": 0 00:15:54.091 }, 00:15:54.091 "claimed": true, 00:15:54.091 "claim_type": "exclusive_write", 00:15:54.091 "zoned": false, 00:15:54.091 "supported_io_types": { 00:15:54.091 "read": true, 00:15:54.091 "write": true, 00:15:54.091 "unmap": true, 00:15:54.091 "flush": true, 00:15:54.091 "reset": true, 00:15:54.091 "nvme_admin": false, 00:15:54.091 "nvme_io": false, 00:15:54.091 "nvme_io_md": false, 00:15:54.091 "write_zeroes": true, 00:15:54.091 "zcopy": true, 00:15:54.091 "get_zone_info": false, 00:15:54.091 "zone_management": false, 00:15:54.091 "zone_append": false, 00:15:54.091 "compare": false, 00:15:54.091 "compare_and_write": false, 00:15:54.091 "abort": true, 00:15:54.091 "seek_hole": false, 00:15:54.091 "seek_data": false, 00:15:54.091 "copy": true, 00:15:54.091 "nvme_iov_md": false 00:15:54.091 }, 00:15:54.091 "memory_domains": [ 00:15:54.091 { 00:15:54.091 "dma_device_id": "system", 00:15:54.091 "dma_device_type": 1 00:15:54.091 }, 00:15:54.091 { 00:15:54.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.091 "dma_device_type": 2 00:15:54.091 } 00:15:54.091 ], 00:15:54.091 "driver_specific": {} 00:15:54.091 } 00:15:54.091 ] 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.091 11:54:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.091 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.091 "name": "Existed_Raid", 00:15:54.091 "uuid": "95dfe722-a5db-4c83-8329-659905d09a60", 00:15:54.091 "strip_size_kb": 64, 00:15:54.091 "state": "online", 00:15:54.091 "raid_level": "raid5f", 00:15:54.091 "superblock": false, 00:15:54.091 "num_base_bdevs": 4, 00:15:54.091 "num_base_bdevs_discovered": 4, 00:15:54.091 "num_base_bdevs_operational": 4, 00:15:54.091 "base_bdevs_list": [ 00:15:54.091 { 00:15:54.091 "name": 
"BaseBdev1", 00:15:54.091 "uuid": "3b4f0faf-1716-41df-8f96-c3e1a3b0079f", 00:15:54.091 "is_configured": true, 00:15:54.091 "data_offset": 0, 00:15:54.091 "data_size": 65536 00:15:54.091 }, 00:15:54.091 { 00:15:54.091 "name": "BaseBdev2", 00:15:54.091 "uuid": "454c571f-8a7b-42d1-9ce8-7371a2534c92", 00:15:54.091 "is_configured": true, 00:15:54.091 "data_offset": 0, 00:15:54.091 "data_size": 65536 00:15:54.091 }, 00:15:54.091 { 00:15:54.091 "name": "BaseBdev3", 00:15:54.092 "uuid": "d59c9b1d-08ef-48a7-af5c-c66843df5350", 00:15:54.092 "is_configured": true, 00:15:54.092 "data_offset": 0, 00:15:54.092 "data_size": 65536 00:15:54.092 }, 00:15:54.092 { 00:15:54.092 "name": "BaseBdev4", 00:15:54.092 "uuid": "7035e1da-8e37-447d-8c24-47372d24ec94", 00:15:54.092 "is_configured": true, 00:15:54.092 "data_offset": 0, 00:15:54.092 "data_size": 65536 00:15:54.092 } 00:15:54.092 ] 00:15:54.092 }' 00:15:54.092 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.092 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.660 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:54.660 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:54.660 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:54.660 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:54.660 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:54.660 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:54.660 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:54.660 11:54:20 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:54.660 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.660 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.660 [2024-11-27 11:54:20.757087] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:54.660 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.660 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:54.660 "name": "Existed_Raid", 00:15:54.660 "aliases": [ 00:15:54.660 "95dfe722-a5db-4c83-8329-659905d09a60" 00:15:54.660 ], 00:15:54.660 "product_name": "Raid Volume", 00:15:54.660 "block_size": 512, 00:15:54.660 "num_blocks": 196608, 00:15:54.660 "uuid": "95dfe722-a5db-4c83-8329-659905d09a60", 00:15:54.660 "assigned_rate_limits": { 00:15:54.660 "rw_ios_per_sec": 0, 00:15:54.660 "rw_mbytes_per_sec": 0, 00:15:54.660 "r_mbytes_per_sec": 0, 00:15:54.660 "w_mbytes_per_sec": 0 00:15:54.660 }, 00:15:54.660 "claimed": false, 00:15:54.660 "zoned": false, 00:15:54.660 "supported_io_types": { 00:15:54.660 "read": true, 00:15:54.660 "write": true, 00:15:54.660 "unmap": false, 00:15:54.660 "flush": false, 00:15:54.660 "reset": true, 00:15:54.660 "nvme_admin": false, 00:15:54.660 "nvme_io": false, 00:15:54.660 "nvme_io_md": false, 00:15:54.660 "write_zeroes": true, 00:15:54.660 "zcopy": false, 00:15:54.660 "get_zone_info": false, 00:15:54.660 "zone_management": false, 00:15:54.660 "zone_append": false, 00:15:54.660 "compare": false, 00:15:54.660 "compare_and_write": false, 00:15:54.660 "abort": false, 00:15:54.660 "seek_hole": false, 00:15:54.660 "seek_data": false, 00:15:54.660 "copy": false, 00:15:54.660 "nvme_iov_md": false 00:15:54.660 }, 00:15:54.660 "driver_specific": { 00:15:54.660 "raid": { 00:15:54.660 "uuid": "95dfe722-a5db-4c83-8329-659905d09a60", 00:15:54.660 "strip_size_kb": 64, 
00:15:54.660 "state": "online", 00:15:54.660 "raid_level": "raid5f", 00:15:54.660 "superblock": false, 00:15:54.660 "num_base_bdevs": 4, 00:15:54.660 "num_base_bdevs_discovered": 4, 00:15:54.660 "num_base_bdevs_operational": 4, 00:15:54.660 "base_bdevs_list": [ 00:15:54.660 { 00:15:54.660 "name": "BaseBdev1", 00:15:54.660 "uuid": "3b4f0faf-1716-41df-8f96-c3e1a3b0079f", 00:15:54.660 "is_configured": true, 00:15:54.660 "data_offset": 0, 00:15:54.660 "data_size": 65536 00:15:54.660 }, 00:15:54.660 { 00:15:54.661 "name": "BaseBdev2", 00:15:54.661 "uuid": "454c571f-8a7b-42d1-9ce8-7371a2534c92", 00:15:54.661 "is_configured": true, 00:15:54.661 "data_offset": 0, 00:15:54.661 "data_size": 65536 00:15:54.661 }, 00:15:54.661 { 00:15:54.661 "name": "BaseBdev3", 00:15:54.661 "uuid": "d59c9b1d-08ef-48a7-af5c-c66843df5350", 00:15:54.661 "is_configured": true, 00:15:54.661 "data_offset": 0, 00:15:54.661 "data_size": 65536 00:15:54.661 }, 00:15:54.661 { 00:15:54.661 "name": "BaseBdev4", 00:15:54.661 "uuid": "7035e1da-8e37-447d-8c24-47372d24ec94", 00:15:54.661 "is_configured": true, 00:15:54.661 "data_offset": 0, 00:15:54.661 "data_size": 65536 00:15:54.661 } 00:15:54.661 ] 00:15:54.661 } 00:15:54.661 } 00:15:54.661 }' 00:15:54.661 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:54.661 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:54.661 BaseBdev2 00:15:54.661 BaseBdev3 00:15:54.661 BaseBdev4' 00:15:54.661 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.661 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:54.661 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.661 11:54:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:54.661 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.661 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.661 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.661 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.661 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:54.661 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:54.661 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.661 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:54.661 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.661 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.661 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.661 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.661 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:54.661 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:54.661 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.661 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:15:54.661 11:54:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.661 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.661 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.661 11:54:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:15:54.921 [2024-11-27 11:54:21.112288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.921 11:54:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.921 "name": "Existed_Raid", 00:15:54.921 "uuid": "95dfe722-a5db-4c83-8329-659905d09a60", 00:15:54.921 "strip_size_kb": 64, 00:15:54.921 "state": "online", 00:15:54.921 "raid_level": "raid5f", 00:15:54.921 "superblock": false, 00:15:54.921 "num_base_bdevs": 4, 00:15:54.921 "num_base_bdevs_discovered": 3, 00:15:54.921 "num_base_bdevs_operational": 3, 00:15:54.921 "base_bdevs_list": [ 00:15:54.921 { 00:15:54.921 "name": null, 00:15:54.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.921 "is_configured": false, 00:15:54.921 "data_offset": 0, 00:15:54.921 "data_size": 65536 00:15:54.921 }, 00:15:54.921 { 00:15:54.921 "name": "BaseBdev2", 00:15:54.921 "uuid": "454c571f-8a7b-42d1-9ce8-7371a2534c92", 00:15:54.921 "is_configured": true, 00:15:54.921 "data_offset": 0, 00:15:54.921 "data_size": 65536 00:15:54.921 }, 00:15:54.921 { 00:15:54.921 "name": "BaseBdev3", 00:15:54.921 "uuid": "d59c9b1d-08ef-48a7-af5c-c66843df5350", 00:15:54.921 "is_configured": true, 00:15:54.921 "data_offset": 0, 00:15:54.921 "data_size": 65536 00:15:54.921 }, 00:15:54.921 { 00:15:54.921 "name": "BaseBdev4", 00:15:54.921 "uuid": "7035e1da-8e37-447d-8c24-47372d24ec94", 00:15:54.921 "is_configured": true, 00:15:54.921 "data_offset": 0, 00:15:54.921 "data_size": 65536 00:15:54.921 } 00:15:54.921 ] 00:15:54.921 }' 00:15:54.921 
11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.921 11:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.489 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:55.489 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:55.489 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.489 11:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.489 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:55.489 11:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.489 11:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.489 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:55.489 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:55.489 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:55.489 11:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.489 11:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.489 [2024-11-27 11:54:21.780092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:55.489 [2024-11-27 11:54:21.780262] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:55.748 [2024-11-27 11:54:21.897590] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:55.748 11:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:15:55.748 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:55.748 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:55.748 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.748 11:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.748 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:55.748 11:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.748 11:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.748 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:55.748 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:55.748 11:54:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:55.748 11:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.748 11:54:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.748 [2024-11-27 11:54:21.953561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:55.748 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.748 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:55.748 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:55.748 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:55.748 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:55.748 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.748 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.748 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.748 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:55.748 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:55.748 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:55.748 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.748 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.748 [2024-11-27 11:54:22.130420] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:55.748 [2024-11-27 11:54:22.130543] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:56.008 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.008 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:56.008 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:56.008 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.008 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.008 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:56.008 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.008 11:54:22 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.008 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:56.008 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:56.008 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:56.008 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:56.008 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:56.008 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:56.008 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.008 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.008 BaseBdev2 00:15:56.008 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.008 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:56.008 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:56.008 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:56.008 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:56.009 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:56.009 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:56.009 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:56.009 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:56.009 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.009 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.009 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:56.009 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.009 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.009 [ 00:15:56.009 { 00:15:56.009 "name": "BaseBdev2", 00:15:56.009 "aliases": [ 00:15:56.009 "dcd17734-4a15-47b5-8ed5-0746618ffb1c" 00:15:56.009 ], 00:15:56.009 "product_name": "Malloc disk", 00:15:56.009 "block_size": 512, 00:15:56.009 "num_blocks": 65536, 00:15:56.009 "uuid": "dcd17734-4a15-47b5-8ed5-0746618ffb1c", 00:15:56.009 "assigned_rate_limits": { 00:15:56.009 "rw_ios_per_sec": 0, 00:15:56.009 "rw_mbytes_per_sec": 0, 00:15:56.009 "r_mbytes_per_sec": 0, 00:15:56.009 "w_mbytes_per_sec": 0 00:15:56.009 }, 00:15:56.009 "claimed": false, 00:15:56.009 "zoned": false, 00:15:56.009 "supported_io_types": { 00:15:56.009 "read": true, 00:15:56.009 "write": true, 00:15:56.009 "unmap": true, 00:15:56.009 "flush": true, 00:15:56.009 "reset": true, 00:15:56.009 "nvme_admin": false, 00:15:56.009 "nvme_io": false, 00:15:56.009 "nvme_io_md": false, 00:15:56.009 "write_zeroes": true, 00:15:56.009 "zcopy": true, 00:15:56.009 "get_zone_info": false, 00:15:56.009 "zone_management": false, 00:15:56.009 "zone_append": false, 00:15:56.009 "compare": false, 00:15:56.009 "compare_and_write": false, 00:15:56.009 "abort": true, 00:15:56.009 "seek_hole": false, 00:15:56.009 "seek_data": false, 00:15:56.009 "copy": true, 00:15:56.009 "nvme_iov_md": false 00:15:56.009 }, 00:15:56.009 "memory_domains": [ 00:15:56.009 { 00:15:56.009 "dma_device_id": "system", 00:15:56.009 "dma_device_type": 1 00:15:56.009 }, 
00:15:56.009 { 00:15:56.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.009 "dma_device_type": 2 00:15:56.009 } 00:15:56.009 ], 00:15:56.009 "driver_specific": {} 00:15:56.009 } 00:15:56.009 ] 00:15:56.009 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.009 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:56.009 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:56.009 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:56.009 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:56.009 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.009 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.269 BaseBdev3 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.269 [ 00:15:56.269 { 00:15:56.269 "name": "BaseBdev3", 00:15:56.269 "aliases": [ 00:15:56.269 "19a3d0e7-e49d-48a7-af1d-7a94838ae4b8" 00:15:56.269 ], 00:15:56.269 "product_name": "Malloc disk", 00:15:56.269 "block_size": 512, 00:15:56.269 "num_blocks": 65536, 00:15:56.269 "uuid": "19a3d0e7-e49d-48a7-af1d-7a94838ae4b8", 00:15:56.269 "assigned_rate_limits": { 00:15:56.269 "rw_ios_per_sec": 0, 00:15:56.269 "rw_mbytes_per_sec": 0, 00:15:56.269 "r_mbytes_per_sec": 0, 00:15:56.269 "w_mbytes_per_sec": 0 00:15:56.269 }, 00:15:56.269 "claimed": false, 00:15:56.269 "zoned": false, 00:15:56.269 "supported_io_types": { 00:15:56.269 "read": true, 00:15:56.269 "write": true, 00:15:56.269 "unmap": true, 00:15:56.269 "flush": true, 00:15:56.269 "reset": true, 00:15:56.269 "nvme_admin": false, 00:15:56.269 "nvme_io": false, 00:15:56.269 "nvme_io_md": false, 00:15:56.269 "write_zeroes": true, 00:15:56.269 "zcopy": true, 00:15:56.269 "get_zone_info": false, 00:15:56.269 "zone_management": false, 00:15:56.269 "zone_append": false, 00:15:56.269 "compare": false, 00:15:56.269 "compare_and_write": false, 00:15:56.269 "abort": true, 00:15:56.269 "seek_hole": false, 00:15:56.269 "seek_data": false, 00:15:56.269 "copy": true, 00:15:56.269 "nvme_iov_md": false 00:15:56.269 }, 00:15:56.269 "memory_domains": [ 00:15:56.269 { 00:15:56.269 "dma_device_id": "system", 00:15:56.269 
"dma_device_type": 1 00:15:56.269 }, 00:15:56.269 { 00:15:56.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.269 "dma_device_type": 2 00:15:56.269 } 00:15:56.269 ], 00:15:56.269 "driver_specific": {} 00:15:56.269 } 00:15:56.269 ] 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.269 BaseBdev4 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:56.269 11:54:22 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.269 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.269 [ 00:15:56.269 { 00:15:56.269 "name": "BaseBdev4", 00:15:56.269 "aliases": [ 00:15:56.269 "b9440c6a-b385-40a2-b1bb-aa992a1c99e3" 00:15:56.269 ], 00:15:56.269 "product_name": "Malloc disk", 00:15:56.269 "block_size": 512, 00:15:56.269 "num_blocks": 65536, 00:15:56.269 "uuid": "b9440c6a-b385-40a2-b1bb-aa992a1c99e3", 00:15:56.269 "assigned_rate_limits": { 00:15:56.269 "rw_ios_per_sec": 0, 00:15:56.269 "rw_mbytes_per_sec": 0, 00:15:56.269 "r_mbytes_per_sec": 0, 00:15:56.269 "w_mbytes_per_sec": 0 00:15:56.269 }, 00:15:56.269 "claimed": false, 00:15:56.269 "zoned": false, 00:15:56.269 "supported_io_types": { 00:15:56.269 "read": true, 00:15:56.269 "write": true, 00:15:56.269 "unmap": true, 00:15:56.269 "flush": true, 00:15:56.269 "reset": true, 00:15:56.269 "nvme_admin": false, 00:15:56.269 "nvme_io": false, 00:15:56.269 "nvme_io_md": false, 00:15:56.269 "write_zeroes": true, 00:15:56.269 "zcopy": true, 00:15:56.269 "get_zone_info": false, 00:15:56.269 "zone_management": false, 00:15:56.269 "zone_append": false, 00:15:56.269 "compare": false, 00:15:56.269 "compare_and_write": false, 00:15:56.269 "abort": true, 00:15:56.269 "seek_hole": false, 00:15:56.269 "seek_data": false, 00:15:56.269 "copy": true, 00:15:56.269 "nvme_iov_md": false 00:15:56.269 }, 00:15:56.269 "memory_domains": [ 00:15:56.269 { 00:15:56.269 
"dma_device_id": "system", 00:15:56.269 "dma_device_type": 1 00:15:56.269 }, 00:15:56.269 { 00:15:56.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.269 "dma_device_type": 2 00:15:56.269 } 00:15:56.269 ], 00:15:56.269 "driver_specific": {} 00:15:56.270 } 00:15:56.270 ] 00:15:56.270 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.270 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:56.270 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:56.270 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:56.270 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:56.270 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.270 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.270 [2024-11-27 11:54:22.574000] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:56.270 [2024-11-27 11:54:22.574052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:56.270 [2024-11-27 11:54:22.574080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:56.270 [2024-11-27 11:54:22.576207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:56.270 [2024-11-27 11:54:22.576269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:56.270 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.270 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:15:56.270 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.270 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.270 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.270 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.270 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.270 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.270 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.270 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.270 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.270 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.270 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.270 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.270 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.270 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.270 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.270 "name": "Existed_Raid", 00:15:56.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.270 "strip_size_kb": 64, 00:15:56.270 "state": "configuring", 00:15:56.270 "raid_level": "raid5f", 00:15:56.270 "superblock": false, 00:15:56.270 
"num_base_bdevs": 4, 00:15:56.270 "num_base_bdevs_discovered": 3, 00:15:56.270 "num_base_bdevs_operational": 4, 00:15:56.270 "base_bdevs_list": [ 00:15:56.270 { 00:15:56.270 "name": "BaseBdev1", 00:15:56.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.270 "is_configured": false, 00:15:56.270 "data_offset": 0, 00:15:56.270 "data_size": 0 00:15:56.270 }, 00:15:56.270 { 00:15:56.270 "name": "BaseBdev2", 00:15:56.270 "uuid": "dcd17734-4a15-47b5-8ed5-0746618ffb1c", 00:15:56.270 "is_configured": true, 00:15:56.270 "data_offset": 0, 00:15:56.270 "data_size": 65536 00:15:56.270 }, 00:15:56.270 { 00:15:56.270 "name": "BaseBdev3", 00:15:56.270 "uuid": "19a3d0e7-e49d-48a7-af1d-7a94838ae4b8", 00:15:56.270 "is_configured": true, 00:15:56.270 "data_offset": 0, 00:15:56.270 "data_size": 65536 00:15:56.270 }, 00:15:56.270 { 00:15:56.270 "name": "BaseBdev4", 00:15:56.270 "uuid": "b9440c6a-b385-40a2-b1bb-aa992a1c99e3", 00:15:56.270 "is_configured": true, 00:15:56.270 "data_offset": 0, 00:15:56.270 "data_size": 65536 00:15:56.270 } 00:15:56.270 ] 00:15:56.270 }' 00:15:56.270 11:54:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.270 11:54:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.838 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:56.838 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.838 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.838 [2024-11-27 11:54:23.033252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:56.838 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.838 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:15:56.838 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.838 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.838 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.838 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.838 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.838 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.838 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.838 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.838 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.838 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.838 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.838 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.838 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.838 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.838 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.838 "name": "Existed_Raid", 00:15:56.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.838 "strip_size_kb": 64, 00:15:56.838 "state": "configuring", 00:15:56.838 "raid_level": "raid5f", 00:15:56.838 "superblock": false, 00:15:56.838 "num_base_bdevs": 4, 
00:15:56.838 "num_base_bdevs_discovered": 2, 00:15:56.838 "num_base_bdevs_operational": 4, 00:15:56.838 "base_bdevs_list": [ 00:15:56.838 { 00:15:56.838 "name": "BaseBdev1", 00:15:56.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.838 "is_configured": false, 00:15:56.838 "data_offset": 0, 00:15:56.838 "data_size": 0 00:15:56.838 }, 00:15:56.838 { 00:15:56.838 "name": null, 00:15:56.838 "uuid": "dcd17734-4a15-47b5-8ed5-0746618ffb1c", 00:15:56.838 "is_configured": false, 00:15:56.838 "data_offset": 0, 00:15:56.838 "data_size": 65536 00:15:56.838 }, 00:15:56.838 { 00:15:56.838 "name": "BaseBdev3", 00:15:56.838 "uuid": "19a3d0e7-e49d-48a7-af1d-7a94838ae4b8", 00:15:56.838 "is_configured": true, 00:15:56.838 "data_offset": 0, 00:15:56.838 "data_size": 65536 00:15:56.838 }, 00:15:56.838 { 00:15:56.838 "name": "BaseBdev4", 00:15:56.838 "uuid": "b9440c6a-b385-40a2-b1bb-aa992a1c99e3", 00:15:56.838 "is_configured": true, 00:15:56.838 "data_offset": 0, 00:15:56.838 "data_size": 65536 00:15:56.838 } 00:15:56.838 ] 00:15:56.838 }' 00:15:56.838 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.838 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:57.406 11:54:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.406 [2024-11-27 11:54:23.635227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:57.406 BaseBdev1 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.406 11:54:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.406 [ 00:15:57.406 { 00:15:57.406 "name": "BaseBdev1", 00:15:57.406 "aliases": [ 00:15:57.406 "5ebc52fc-63f2-4476-9032-de6b6b435c57" 00:15:57.406 ], 00:15:57.406 "product_name": "Malloc disk", 00:15:57.406 "block_size": 512, 00:15:57.406 "num_blocks": 65536, 00:15:57.406 "uuid": "5ebc52fc-63f2-4476-9032-de6b6b435c57", 00:15:57.406 "assigned_rate_limits": { 00:15:57.406 "rw_ios_per_sec": 0, 00:15:57.406 "rw_mbytes_per_sec": 0, 00:15:57.406 "r_mbytes_per_sec": 0, 00:15:57.406 "w_mbytes_per_sec": 0 00:15:57.406 }, 00:15:57.406 "claimed": true, 00:15:57.406 "claim_type": "exclusive_write", 00:15:57.406 "zoned": false, 00:15:57.406 "supported_io_types": { 00:15:57.406 "read": true, 00:15:57.406 "write": true, 00:15:57.406 "unmap": true, 00:15:57.406 "flush": true, 00:15:57.406 "reset": true, 00:15:57.406 "nvme_admin": false, 00:15:57.406 "nvme_io": false, 00:15:57.406 "nvme_io_md": false, 00:15:57.406 "write_zeroes": true, 00:15:57.406 "zcopy": true, 00:15:57.406 "get_zone_info": false, 00:15:57.406 "zone_management": false, 00:15:57.406 "zone_append": false, 00:15:57.406 "compare": false, 00:15:57.406 "compare_and_write": false, 00:15:57.406 "abort": true, 00:15:57.406 "seek_hole": false, 00:15:57.406 "seek_data": false, 00:15:57.406 "copy": true, 00:15:57.406 "nvme_iov_md": false 00:15:57.406 }, 00:15:57.406 "memory_domains": [ 00:15:57.406 { 00:15:57.406 "dma_device_id": "system", 00:15:57.406 "dma_device_type": 1 00:15:57.406 }, 00:15:57.406 { 00:15:57.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.406 "dma_device_type": 2 00:15:57.406 } 00:15:57.406 ], 00:15:57.406 "driver_specific": {} 00:15:57.406 } 00:15:57.406 ] 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:57.406 11:54:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.406 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.406 "name": "Existed_Raid", 00:15:57.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.406 "strip_size_kb": 64, 00:15:57.406 "state": 
"configuring", 00:15:57.406 "raid_level": "raid5f", 00:15:57.406 "superblock": false, 00:15:57.406 "num_base_bdevs": 4, 00:15:57.406 "num_base_bdevs_discovered": 3, 00:15:57.406 "num_base_bdevs_operational": 4, 00:15:57.406 "base_bdevs_list": [ 00:15:57.406 { 00:15:57.406 "name": "BaseBdev1", 00:15:57.406 "uuid": "5ebc52fc-63f2-4476-9032-de6b6b435c57", 00:15:57.406 "is_configured": true, 00:15:57.406 "data_offset": 0, 00:15:57.406 "data_size": 65536 00:15:57.406 }, 00:15:57.406 { 00:15:57.406 "name": null, 00:15:57.406 "uuid": "dcd17734-4a15-47b5-8ed5-0746618ffb1c", 00:15:57.406 "is_configured": false, 00:15:57.406 "data_offset": 0, 00:15:57.406 "data_size": 65536 00:15:57.406 }, 00:15:57.406 { 00:15:57.406 "name": "BaseBdev3", 00:15:57.406 "uuid": "19a3d0e7-e49d-48a7-af1d-7a94838ae4b8", 00:15:57.406 "is_configured": true, 00:15:57.406 "data_offset": 0, 00:15:57.406 "data_size": 65536 00:15:57.406 }, 00:15:57.406 { 00:15:57.406 "name": "BaseBdev4", 00:15:57.406 "uuid": "b9440c6a-b385-40a2-b1bb-aa992a1c99e3", 00:15:57.406 "is_configured": true, 00:15:57.407 "data_offset": 0, 00:15:57.407 "data_size": 65536 00:15:57.407 } 00:15:57.407 ] 00:15:57.407 }' 00:15:57.407 11:54:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.407 11:54:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.976 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.976 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:57.976 11:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.976 11:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.976 11:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.976 11:54:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:57.976 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:57.976 11:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.976 11:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.976 [2024-11-27 11:54:24.206402] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:57.976 11:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.976 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:57.976 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.976 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.976 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.976 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.976 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.976 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.976 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.976 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.976 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.976 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.976 11:54:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.976 11:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.976 11:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.976 11:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.976 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.976 "name": "Existed_Raid", 00:15:57.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.976 "strip_size_kb": 64, 00:15:57.976 "state": "configuring", 00:15:57.976 "raid_level": "raid5f", 00:15:57.976 "superblock": false, 00:15:57.976 "num_base_bdevs": 4, 00:15:57.976 "num_base_bdevs_discovered": 2, 00:15:57.976 "num_base_bdevs_operational": 4, 00:15:57.976 "base_bdevs_list": [ 00:15:57.976 { 00:15:57.976 "name": "BaseBdev1", 00:15:57.976 "uuid": "5ebc52fc-63f2-4476-9032-de6b6b435c57", 00:15:57.976 "is_configured": true, 00:15:57.977 "data_offset": 0, 00:15:57.977 "data_size": 65536 00:15:57.977 }, 00:15:57.977 { 00:15:57.977 "name": null, 00:15:57.977 "uuid": "dcd17734-4a15-47b5-8ed5-0746618ffb1c", 00:15:57.977 "is_configured": false, 00:15:57.977 "data_offset": 0, 00:15:57.977 "data_size": 65536 00:15:57.977 }, 00:15:57.977 { 00:15:57.977 "name": null, 00:15:57.977 "uuid": "19a3d0e7-e49d-48a7-af1d-7a94838ae4b8", 00:15:57.977 "is_configured": false, 00:15:57.977 "data_offset": 0, 00:15:57.977 "data_size": 65536 00:15:57.977 }, 00:15:57.977 { 00:15:57.977 "name": "BaseBdev4", 00:15:57.977 "uuid": "b9440c6a-b385-40a2-b1bb-aa992a1c99e3", 00:15:57.977 "is_configured": true, 00:15:57.977 "data_offset": 0, 00:15:57.977 "data_size": 65536 00:15:57.977 } 00:15:57.977 ] 00:15:57.977 }' 00:15:57.977 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.977 11:54:24 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.545 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:58.545 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.545 11:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.545 11:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.545 11:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.545 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:58.546 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:58.546 11:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.546 11:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.546 [2024-11-27 11:54:24.705751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:58.546 11:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.546 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:58.546 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.546 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.546 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.546 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.546 
11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:58.546 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.546 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.546 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.546 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.546 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.546 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.546 11:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.546 11:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.546 11:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.546 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.546 "name": "Existed_Raid", 00:15:58.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.546 "strip_size_kb": 64, 00:15:58.546 "state": "configuring", 00:15:58.546 "raid_level": "raid5f", 00:15:58.546 "superblock": false, 00:15:58.546 "num_base_bdevs": 4, 00:15:58.546 "num_base_bdevs_discovered": 3, 00:15:58.546 "num_base_bdevs_operational": 4, 00:15:58.546 "base_bdevs_list": [ 00:15:58.546 { 00:15:58.546 "name": "BaseBdev1", 00:15:58.546 "uuid": "5ebc52fc-63f2-4476-9032-de6b6b435c57", 00:15:58.546 "is_configured": true, 00:15:58.546 "data_offset": 0, 00:15:58.546 "data_size": 65536 00:15:58.546 }, 00:15:58.546 { 00:15:58.546 "name": null, 00:15:58.546 "uuid": "dcd17734-4a15-47b5-8ed5-0746618ffb1c", 00:15:58.546 "is_configured": 
false, 00:15:58.546 "data_offset": 0, 00:15:58.546 "data_size": 65536 00:15:58.546 }, 00:15:58.546 { 00:15:58.546 "name": "BaseBdev3", 00:15:58.546 "uuid": "19a3d0e7-e49d-48a7-af1d-7a94838ae4b8", 00:15:58.546 "is_configured": true, 00:15:58.546 "data_offset": 0, 00:15:58.546 "data_size": 65536 00:15:58.546 }, 00:15:58.546 { 00:15:58.546 "name": "BaseBdev4", 00:15:58.546 "uuid": "b9440c6a-b385-40a2-b1bb-aa992a1c99e3", 00:15:58.546 "is_configured": true, 00:15:58.546 "data_offset": 0, 00:15:58.546 "data_size": 65536 00:15:58.546 } 00:15:58.546 ] 00:15:58.546 }' 00:15:58.546 11:54:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.546 11:54:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.804 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.804 11:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.804 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:58.804 11:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.804 11:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.804 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:58.804 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:58.804 11:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.804 11:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.063 [2024-11-27 11:54:25.193042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:59.063 11:54:25 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.063 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:59.063 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.063 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:59.063 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.063 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.063 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:59.063 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.063 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.063 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.063 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.063 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.063 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.063 11:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.063 11:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.063 11:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.063 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.063 "name": "Existed_Raid", 00:15:59.063 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:59.063 "strip_size_kb": 64, 00:15:59.063 "state": "configuring", 00:15:59.063 "raid_level": "raid5f", 00:15:59.063 "superblock": false, 00:15:59.063 "num_base_bdevs": 4, 00:15:59.063 "num_base_bdevs_discovered": 2, 00:15:59.063 "num_base_bdevs_operational": 4, 00:15:59.063 "base_bdevs_list": [ 00:15:59.063 { 00:15:59.063 "name": null, 00:15:59.063 "uuid": "5ebc52fc-63f2-4476-9032-de6b6b435c57", 00:15:59.063 "is_configured": false, 00:15:59.063 "data_offset": 0, 00:15:59.063 "data_size": 65536 00:15:59.063 }, 00:15:59.063 { 00:15:59.063 "name": null, 00:15:59.063 "uuid": "dcd17734-4a15-47b5-8ed5-0746618ffb1c", 00:15:59.063 "is_configured": false, 00:15:59.063 "data_offset": 0, 00:15:59.063 "data_size": 65536 00:15:59.063 }, 00:15:59.063 { 00:15:59.063 "name": "BaseBdev3", 00:15:59.063 "uuid": "19a3d0e7-e49d-48a7-af1d-7a94838ae4b8", 00:15:59.063 "is_configured": true, 00:15:59.063 "data_offset": 0, 00:15:59.063 "data_size": 65536 00:15:59.064 }, 00:15:59.064 { 00:15:59.064 "name": "BaseBdev4", 00:15:59.064 "uuid": "b9440c6a-b385-40a2-b1bb-aa992a1c99e3", 00:15:59.064 "is_configured": true, 00:15:59.064 "data_offset": 0, 00:15:59.064 "data_size": 65536 00:15:59.064 } 00:15:59.064 ] 00:15:59.064 }' 00:15:59.064 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.064 11:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.631 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.631 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:59.631 11:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.631 11:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.631 11:54:25 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.631 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:59.631 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:59.631 11:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.631 11:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.631 [2024-11-27 11:54:25.814128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:59.631 11:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.631 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:59.631 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.631 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:59.631 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.631 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.631 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:59.631 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.631 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.631 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.631 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.631 11:54:25 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.631 11:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.631 11:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.631 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.631 11:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.631 11:54:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.631 "name": "Existed_Raid", 00:15:59.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.631 "strip_size_kb": 64, 00:15:59.631 "state": "configuring", 00:15:59.631 "raid_level": "raid5f", 00:15:59.631 "superblock": false, 00:15:59.631 "num_base_bdevs": 4, 00:15:59.631 "num_base_bdevs_discovered": 3, 00:15:59.631 "num_base_bdevs_operational": 4, 00:15:59.631 "base_bdevs_list": [ 00:15:59.631 { 00:15:59.631 "name": null, 00:15:59.631 "uuid": "5ebc52fc-63f2-4476-9032-de6b6b435c57", 00:15:59.631 "is_configured": false, 00:15:59.631 "data_offset": 0, 00:15:59.631 "data_size": 65536 00:15:59.631 }, 00:15:59.631 { 00:15:59.631 "name": "BaseBdev2", 00:15:59.631 "uuid": "dcd17734-4a15-47b5-8ed5-0746618ffb1c", 00:15:59.631 "is_configured": true, 00:15:59.631 "data_offset": 0, 00:15:59.631 "data_size": 65536 00:15:59.631 }, 00:15:59.631 { 00:15:59.631 "name": "BaseBdev3", 00:15:59.631 "uuid": "19a3d0e7-e49d-48a7-af1d-7a94838ae4b8", 00:15:59.631 "is_configured": true, 00:15:59.631 "data_offset": 0, 00:15:59.631 "data_size": 65536 00:15:59.631 }, 00:15:59.631 { 00:15:59.631 "name": "BaseBdev4", 00:15:59.631 "uuid": "b9440c6a-b385-40a2-b1bb-aa992a1c99e3", 00:15:59.631 "is_configured": true, 00:15:59.631 "data_offset": 0, 00:15:59.631 "data_size": 65536 00:15:59.631 } 00:15:59.631 ] 00:15:59.631 }' 00:15:59.631 11:54:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.631 11:54:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.199 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.199 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:00.199 11:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.199 11:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.199 11:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.199 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:00.199 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:00.199 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.199 11:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.199 11:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.199 11:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5ebc52fc-63f2-4476-9032-de6b6b435c57 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.200 [2024-11-27 11:54:26.440062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:00.200 [2024-11-27 
11:54:26.440204] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:00.200 [2024-11-27 11:54:26.440247] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:00.200 [2024-11-27 11:54:26.440572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:00.200 [2024-11-27 11:54:26.448920] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:00.200 [2024-11-27 11:54:26.448988] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:00.200 [2024-11-27 11:54:26.449380] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.200 NewBaseBdev 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.200 [ 00:16:00.200 { 00:16:00.200 "name": "NewBaseBdev", 00:16:00.200 "aliases": [ 00:16:00.200 "5ebc52fc-63f2-4476-9032-de6b6b435c57" 00:16:00.200 ], 00:16:00.200 "product_name": "Malloc disk", 00:16:00.200 "block_size": 512, 00:16:00.200 "num_blocks": 65536, 00:16:00.200 "uuid": "5ebc52fc-63f2-4476-9032-de6b6b435c57", 00:16:00.200 "assigned_rate_limits": { 00:16:00.200 "rw_ios_per_sec": 0, 00:16:00.200 "rw_mbytes_per_sec": 0, 00:16:00.200 "r_mbytes_per_sec": 0, 00:16:00.200 "w_mbytes_per_sec": 0 00:16:00.200 }, 00:16:00.200 "claimed": true, 00:16:00.200 "claim_type": "exclusive_write", 00:16:00.200 "zoned": false, 00:16:00.200 "supported_io_types": { 00:16:00.200 "read": true, 00:16:00.200 "write": true, 00:16:00.200 "unmap": true, 00:16:00.200 "flush": true, 00:16:00.200 "reset": true, 00:16:00.200 "nvme_admin": false, 00:16:00.200 "nvme_io": false, 00:16:00.200 "nvme_io_md": false, 00:16:00.200 "write_zeroes": true, 00:16:00.200 "zcopy": true, 00:16:00.200 "get_zone_info": false, 00:16:00.200 "zone_management": false, 00:16:00.200 "zone_append": false, 00:16:00.200 "compare": false, 00:16:00.200 "compare_and_write": false, 00:16:00.200 "abort": true, 00:16:00.200 "seek_hole": false, 00:16:00.200 "seek_data": false, 00:16:00.200 "copy": true, 00:16:00.200 "nvme_iov_md": false 00:16:00.200 }, 00:16:00.200 "memory_domains": [ 00:16:00.200 { 00:16:00.200 "dma_device_id": "system", 00:16:00.200 "dma_device_type": 1 00:16:00.200 }, 00:16:00.200 { 00:16:00.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.200 "dma_device_type": 2 00:16:00.200 } 
00:16:00.200 ], 00:16:00.200 "driver_specific": {} 00:16:00.200 } 00:16:00.200 ] 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.200 "name": "Existed_Raid", 00:16:00.200 "uuid": "206e1b9c-4568-49bb-81fe-5ed66a90f0c0", 00:16:00.200 "strip_size_kb": 64, 00:16:00.200 "state": "online", 00:16:00.200 "raid_level": "raid5f", 00:16:00.200 "superblock": false, 00:16:00.200 "num_base_bdevs": 4, 00:16:00.200 "num_base_bdevs_discovered": 4, 00:16:00.200 "num_base_bdevs_operational": 4, 00:16:00.200 "base_bdevs_list": [ 00:16:00.200 { 00:16:00.200 "name": "NewBaseBdev", 00:16:00.200 "uuid": "5ebc52fc-63f2-4476-9032-de6b6b435c57", 00:16:00.200 "is_configured": true, 00:16:00.200 "data_offset": 0, 00:16:00.200 "data_size": 65536 00:16:00.200 }, 00:16:00.200 { 00:16:00.200 "name": "BaseBdev2", 00:16:00.200 "uuid": "dcd17734-4a15-47b5-8ed5-0746618ffb1c", 00:16:00.200 "is_configured": true, 00:16:00.200 "data_offset": 0, 00:16:00.200 "data_size": 65536 00:16:00.200 }, 00:16:00.200 { 00:16:00.200 "name": "BaseBdev3", 00:16:00.200 "uuid": "19a3d0e7-e49d-48a7-af1d-7a94838ae4b8", 00:16:00.200 "is_configured": true, 00:16:00.200 "data_offset": 0, 00:16:00.200 "data_size": 65536 00:16:00.200 }, 00:16:00.200 { 00:16:00.200 "name": "BaseBdev4", 00:16:00.200 "uuid": "b9440c6a-b385-40a2-b1bb-aa992a1c99e3", 00:16:00.200 "is_configured": true, 00:16:00.200 "data_offset": 0, 00:16:00.200 "data_size": 65536 00:16:00.200 } 00:16:00.200 ] 00:16:00.200 }' 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.200 11:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.769 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:00.769 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:00.769 11:54:26 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:00.769 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:00.769 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:00.769 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:00.769 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:00.769 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:00.769 11:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.769 11:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.769 [2024-11-27 11:54:26.931082] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:00.769 11:54:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.769 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:00.769 "name": "Existed_Raid", 00:16:00.769 "aliases": [ 00:16:00.769 "206e1b9c-4568-49bb-81fe-5ed66a90f0c0" 00:16:00.769 ], 00:16:00.769 "product_name": "Raid Volume", 00:16:00.769 "block_size": 512, 00:16:00.769 "num_blocks": 196608, 00:16:00.769 "uuid": "206e1b9c-4568-49bb-81fe-5ed66a90f0c0", 00:16:00.769 "assigned_rate_limits": { 00:16:00.769 "rw_ios_per_sec": 0, 00:16:00.769 "rw_mbytes_per_sec": 0, 00:16:00.769 "r_mbytes_per_sec": 0, 00:16:00.769 "w_mbytes_per_sec": 0 00:16:00.769 }, 00:16:00.769 "claimed": false, 00:16:00.769 "zoned": false, 00:16:00.769 "supported_io_types": { 00:16:00.769 "read": true, 00:16:00.769 "write": true, 00:16:00.769 "unmap": false, 00:16:00.769 "flush": false, 00:16:00.769 "reset": true, 00:16:00.769 "nvme_admin": false, 00:16:00.769 "nvme_io": false, 00:16:00.769 "nvme_io_md": 
false, 00:16:00.769 "write_zeroes": true, 00:16:00.769 "zcopy": false, 00:16:00.769 "get_zone_info": false, 00:16:00.769 "zone_management": false, 00:16:00.769 "zone_append": false, 00:16:00.769 "compare": false, 00:16:00.769 "compare_and_write": false, 00:16:00.769 "abort": false, 00:16:00.769 "seek_hole": false, 00:16:00.769 "seek_data": false, 00:16:00.769 "copy": false, 00:16:00.769 "nvme_iov_md": false 00:16:00.769 }, 00:16:00.769 "driver_specific": { 00:16:00.769 "raid": { 00:16:00.769 "uuid": "206e1b9c-4568-49bb-81fe-5ed66a90f0c0", 00:16:00.769 "strip_size_kb": 64, 00:16:00.769 "state": "online", 00:16:00.769 "raid_level": "raid5f", 00:16:00.769 "superblock": false, 00:16:00.769 "num_base_bdevs": 4, 00:16:00.769 "num_base_bdevs_discovered": 4, 00:16:00.769 "num_base_bdevs_operational": 4, 00:16:00.769 "base_bdevs_list": [ 00:16:00.769 { 00:16:00.769 "name": "NewBaseBdev", 00:16:00.769 "uuid": "5ebc52fc-63f2-4476-9032-de6b6b435c57", 00:16:00.769 "is_configured": true, 00:16:00.769 "data_offset": 0, 00:16:00.769 "data_size": 65536 00:16:00.769 }, 00:16:00.769 { 00:16:00.769 "name": "BaseBdev2", 00:16:00.769 "uuid": "dcd17734-4a15-47b5-8ed5-0746618ffb1c", 00:16:00.769 "is_configured": true, 00:16:00.769 "data_offset": 0, 00:16:00.769 "data_size": 65536 00:16:00.769 }, 00:16:00.769 { 00:16:00.769 "name": "BaseBdev3", 00:16:00.769 "uuid": "19a3d0e7-e49d-48a7-af1d-7a94838ae4b8", 00:16:00.769 "is_configured": true, 00:16:00.769 "data_offset": 0, 00:16:00.769 "data_size": 65536 00:16:00.769 }, 00:16:00.769 { 00:16:00.769 "name": "BaseBdev4", 00:16:00.769 "uuid": "b9440c6a-b385-40a2-b1bb-aa992a1c99e3", 00:16:00.769 "is_configured": true, 00:16:00.770 "data_offset": 0, 00:16:00.770 "data_size": 65536 00:16:00.770 } 00:16:00.770 ] 00:16:00.770 } 00:16:00.770 } 00:16:00.770 }' 00:16:00.770 11:54:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:00.770 11:54:27 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:00.770 BaseBdev2 00:16:00.770 BaseBdev3 00:16:00.770 BaseBdev4' 00:16:00.770 11:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.770 11:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:00.770 11:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.770 11:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:00.770 11:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.770 11:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.770 11:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.770 11:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.770 11:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:00.770 11:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.770 11:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.770 11:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:00.770 11:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.770 11:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.770 11:54:27 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:00.770 11:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.029 11:54:27 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.029 [2024-11-27 11:54:27.266200] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:01.029 [2024-11-27 11:54:27.266284] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:01.029 [2024-11-27 11:54:27.266402] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.029 [2024-11-27 11:54:27.266769] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:01.029 [2024-11-27 11:54:27.266851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82826 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82826 ']' 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82826 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82826 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:01.029 killing process with pid 82826 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82826' 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82826 00:16:01.029 [2024-11-27 11:54:27.309759] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:01.029 11:54:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82826 00:16:01.597 [2024-11-27 11:54:27.784217] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:02.972 ************************************ 00:16:02.972 END TEST raid5f_state_function_test 00:16:02.972 ************************************ 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:02.972 00:16:02.972 real 0m12.576s 00:16:02.972 user 0m19.730s 00:16:02.972 sys 0m2.330s 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.972 11:54:29 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:02.972 11:54:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:02.972 11:54:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:02.972 11:54:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:02.972 ************************************ 00:16:02.972 START TEST 
raid5f_state_function_test_sb 00:16:02.972 ************************************ 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:02.972 
11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:02.972 Process raid pid: 83516 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83516 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83516' 00:16:02.972 11:54:29 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83516 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83516 ']' 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:02.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:02.972 11:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.972 [2024-11-27 11:54:29.284242] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:16:02.972 [2024-11-27 11:54:29.284461] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.230 [2024-11-27 11:54:29.447752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.230 [2024-11-27 11:54:29.578776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.488 [2024-11-27 11:54:29.813703] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.488 [2024-11-27 11:54:29.813850] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:04.055 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:04.055 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:04.055 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:04.055 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.055 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.055 [2024-11-27 11:54:30.205133] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:04.055 [2024-11-27 11:54:30.205243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:04.055 [2024-11-27 11:54:30.205286] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:04.055 [2024-11-27 11:54:30.205321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:04.055 [2024-11-27 11:54:30.205358] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:04.055 [2024-11-27 11:54:30.205386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:04.055 [2024-11-27 11:54:30.205423] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:04.055 [2024-11-27 11:54:30.205454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:04.055 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.055 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:04.055 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.055 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:04.055 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.055 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.055 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.055 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.055 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.055 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.056 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.056 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.056 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:16:04.056 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.056 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.056 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.056 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.056 "name": "Existed_Raid", 00:16:04.056 "uuid": "4c60554f-6a18-4b81-8b77-1fbbac740c8b", 00:16:04.056 "strip_size_kb": 64, 00:16:04.056 "state": "configuring", 00:16:04.056 "raid_level": "raid5f", 00:16:04.056 "superblock": true, 00:16:04.056 "num_base_bdevs": 4, 00:16:04.056 "num_base_bdevs_discovered": 0, 00:16:04.056 "num_base_bdevs_operational": 4, 00:16:04.056 "base_bdevs_list": [ 00:16:04.056 { 00:16:04.056 "name": "BaseBdev1", 00:16:04.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.056 "is_configured": false, 00:16:04.056 "data_offset": 0, 00:16:04.056 "data_size": 0 00:16:04.056 }, 00:16:04.056 { 00:16:04.056 "name": "BaseBdev2", 00:16:04.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.056 "is_configured": false, 00:16:04.056 "data_offset": 0, 00:16:04.056 "data_size": 0 00:16:04.056 }, 00:16:04.056 { 00:16:04.056 "name": "BaseBdev3", 00:16:04.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.056 "is_configured": false, 00:16:04.056 "data_offset": 0, 00:16:04.056 "data_size": 0 00:16:04.056 }, 00:16:04.056 { 00:16:04.056 "name": "BaseBdev4", 00:16:04.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.056 "is_configured": false, 00:16:04.056 "data_offset": 0, 00:16:04.056 "data_size": 0 00:16:04.056 } 00:16:04.056 ] 00:16:04.056 }' 00:16:04.056 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.056 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:04.314 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:04.314 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.314 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.314 [2024-11-27 11:54:30.676351] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:04.314 [2024-11-27 11:54:30.676471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:04.314 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.314 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:04.314 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.314 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.314 [2024-11-27 11:54:30.688329] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:04.314 [2024-11-27 11:54:30.688413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:04.314 [2024-11-27 11:54:30.688449] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:04.314 [2024-11-27 11:54:30.688480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:04.314 [2024-11-27 11:54:30.688521] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:04.314 [2024-11-27 11:54:30.688556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:04.314 [2024-11-27 11:54:30.688584] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:04.314 [2024-11-27 11:54:30.688610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:04.314 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.314 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:04.314 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.314 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.575 [2024-11-27 11:54:30.741437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:04.575 BaseBdev1 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.575 [ 00:16:04.575 { 00:16:04.575 "name": "BaseBdev1", 00:16:04.575 "aliases": [ 00:16:04.575 "4afa9126-648b-4ca6-8ff2-2702d26eb29a" 00:16:04.575 ], 00:16:04.575 "product_name": "Malloc disk", 00:16:04.575 "block_size": 512, 00:16:04.575 "num_blocks": 65536, 00:16:04.575 "uuid": "4afa9126-648b-4ca6-8ff2-2702d26eb29a", 00:16:04.575 "assigned_rate_limits": { 00:16:04.575 "rw_ios_per_sec": 0, 00:16:04.575 "rw_mbytes_per_sec": 0, 00:16:04.575 "r_mbytes_per_sec": 0, 00:16:04.575 "w_mbytes_per_sec": 0 00:16:04.575 }, 00:16:04.575 "claimed": true, 00:16:04.575 "claim_type": "exclusive_write", 00:16:04.575 "zoned": false, 00:16:04.575 "supported_io_types": { 00:16:04.575 "read": true, 00:16:04.575 "write": true, 00:16:04.575 "unmap": true, 00:16:04.575 "flush": true, 00:16:04.575 "reset": true, 00:16:04.575 "nvme_admin": false, 00:16:04.575 "nvme_io": false, 00:16:04.575 "nvme_io_md": false, 00:16:04.575 "write_zeroes": true, 00:16:04.575 "zcopy": true, 00:16:04.575 "get_zone_info": false, 00:16:04.575 "zone_management": false, 00:16:04.575 "zone_append": false, 00:16:04.575 "compare": false, 00:16:04.575 "compare_and_write": false, 00:16:04.575 "abort": true, 00:16:04.575 "seek_hole": false, 00:16:04.575 "seek_data": false, 00:16:04.575 "copy": true, 00:16:04.575 "nvme_iov_md": false 00:16:04.575 }, 00:16:04.575 "memory_domains": [ 00:16:04.575 { 00:16:04.575 "dma_device_id": "system", 00:16:04.575 "dma_device_type": 1 00:16:04.575 }, 00:16:04.575 { 00:16:04.575 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:04.575 "dma_device_type": 2 00:16:04.575 } 00:16:04.575 ], 00:16:04.575 "driver_specific": {} 00:16:04.575 } 00:16:04.575 ] 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.575 "name": "Existed_Raid", 00:16:04.575 "uuid": "1507aa0b-dd7d-4d36-9ad0-4478de965ae2", 00:16:04.575 "strip_size_kb": 64, 00:16:04.575 "state": "configuring", 00:16:04.575 "raid_level": "raid5f", 00:16:04.575 "superblock": true, 00:16:04.575 "num_base_bdevs": 4, 00:16:04.575 "num_base_bdevs_discovered": 1, 00:16:04.575 "num_base_bdevs_operational": 4, 00:16:04.575 "base_bdevs_list": [ 00:16:04.575 { 00:16:04.575 "name": "BaseBdev1", 00:16:04.575 "uuid": "4afa9126-648b-4ca6-8ff2-2702d26eb29a", 00:16:04.575 "is_configured": true, 00:16:04.575 "data_offset": 2048, 00:16:04.575 "data_size": 63488 00:16:04.575 }, 00:16:04.575 { 00:16:04.575 "name": "BaseBdev2", 00:16:04.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.575 "is_configured": false, 00:16:04.575 "data_offset": 0, 00:16:04.575 "data_size": 0 00:16:04.575 }, 00:16:04.575 { 00:16:04.575 "name": "BaseBdev3", 00:16:04.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.575 "is_configured": false, 00:16:04.575 "data_offset": 0, 00:16:04.575 "data_size": 0 00:16:04.575 }, 00:16:04.575 { 00:16:04.575 "name": "BaseBdev4", 00:16:04.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.575 "is_configured": false, 00:16:04.575 "data_offset": 0, 00:16:04.575 "data_size": 0 00:16:04.575 } 00:16:04.575 ] 00:16:04.575 }' 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.575 11:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.851 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:04.851 11:54:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.851 11:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.118 [2024-11-27 11:54:31.236733] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:05.118 [2024-11-27 11:54:31.236875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:05.118 11:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.118 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:05.118 11:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.118 11:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.118 [2024-11-27 11:54:31.248793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:05.118 [2024-11-27 11:54:31.250940] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:05.118 [2024-11-27 11:54:31.250983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:05.118 [2024-11-27 11:54:31.250995] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:05.118 [2024-11-27 11:54:31.251007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:05.118 [2024-11-27 11:54:31.251015] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:05.118 [2024-11-27 11:54:31.251025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:05.118 11:54:31 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.118 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:05.118 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:05.118 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:05.118 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:05.118 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.118 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.118 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.118 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:05.118 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.118 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.118 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.118 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.118 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.118 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.119 11:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.119 11:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.119 11:54:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.119 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.119 "name": "Existed_Raid", 00:16:05.119 "uuid": "3599dad9-07d1-40e7-8f11-e49cad1851bc", 00:16:05.119 "strip_size_kb": 64, 00:16:05.119 "state": "configuring", 00:16:05.119 "raid_level": "raid5f", 00:16:05.119 "superblock": true, 00:16:05.119 "num_base_bdevs": 4, 00:16:05.119 "num_base_bdevs_discovered": 1, 00:16:05.119 "num_base_bdevs_operational": 4, 00:16:05.119 "base_bdevs_list": [ 00:16:05.119 { 00:16:05.119 "name": "BaseBdev1", 00:16:05.119 "uuid": "4afa9126-648b-4ca6-8ff2-2702d26eb29a", 00:16:05.119 "is_configured": true, 00:16:05.119 "data_offset": 2048, 00:16:05.119 "data_size": 63488 00:16:05.119 }, 00:16:05.119 { 00:16:05.119 "name": "BaseBdev2", 00:16:05.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.119 "is_configured": false, 00:16:05.119 "data_offset": 0, 00:16:05.119 "data_size": 0 00:16:05.119 }, 00:16:05.119 { 00:16:05.119 "name": "BaseBdev3", 00:16:05.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.119 "is_configured": false, 00:16:05.119 "data_offset": 0, 00:16:05.119 "data_size": 0 00:16:05.119 }, 00:16:05.119 { 00:16:05.119 "name": "BaseBdev4", 00:16:05.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.119 "is_configured": false, 00:16:05.119 "data_offset": 0, 00:16:05.119 "data_size": 0 00:16:05.119 } 00:16:05.119 ] 00:16:05.119 }' 00:16:05.119 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.119 11:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.377 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:05.377 11:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:05.377 11:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.637 [2024-11-27 11:54:31.761502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:05.637 BaseBdev2 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.637 [ 00:16:05.637 { 00:16:05.637 "name": "BaseBdev2", 00:16:05.637 "aliases": [ 00:16:05.637 
"f6ccaa19-b5db-40b5-98b0-520db60f694d" 00:16:05.637 ], 00:16:05.637 "product_name": "Malloc disk", 00:16:05.637 "block_size": 512, 00:16:05.637 "num_blocks": 65536, 00:16:05.637 "uuid": "f6ccaa19-b5db-40b5-98b0-520db60f694d", 00:16:05.637 "assigned_rate_limits": { 00:16:05.637 "rw_ios_per_sec": 0, 00:16:05.637 "rw_mbytes_per_sec": 0, 00:16:05.637 "r_mbytes_per_sec": 0, 00:16:05.637 "w_mbytes_per_sec": 0 00:16:05.637 }, 00:16:05.637 "claimed": true, 00:16:05.637 "claim_type": "exclusive_write", 00:16:05.637 "zoned": false, 00:16:05.637 "supported_io_types": { 00:16:05.637 "read": true, 00:16:05.637 "write": true, 00:16:05.637 "unmap": true, 00:16:05.637 "flush": true, 00:16:05.637 "reset": true, 00:16:05.637 "nvme_admin": false, 00:16:05.637 "nvme_io": false, 00:16:05.637 "nvme_io_md": false, 00:16:05.637 "write_zeroes": true, 00:16:05.637 "zcopy": true, 00:16:05.637 "get_zone_info": false, 00:16:05.637 "zone_management": false, 00:16:05.637 "zone_append": false, 00:16:05.637 "compare": false, 00:16:05.637 "compare_and_write": false, 00:16:05.637 "abort": true, 00:16:05.637 "seek_hole": false, 00:16:05.637 "seek_data": false, 00:16:05.637 "copy": true, 00:16:05.637 "nvme_iov_md": false 00:16:05.637 }, 00:16:05.637 "memory_domains": [ 00:16:05.637 { 00:16:05.637 "dma_device_id": "system", 00:16:05.637 "dma_device_type": 1 00:16:05.637 }, 00:16:05.637 { 00:16:05.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.637 "dma_device_type": 2 00:16:05.637 } 00:16:05.637 ], 00:16:05.637 "driver_specific": {} 00:16:05.637 } 00:16:05.637 ] 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.637 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.637 "name": "Existed_Raid", 00:16:05.637 "uuid": 
"3599dad9-07d1-40e7-8f11-e49cad1851bc", 00:16:05.637 "strip_size_kb": 64, 00:16:05.637 "state": "configuring", 00:16:05.638 "raid_level": "raid5f", 00:16:05.638 "superblock": true, 00:16:05.638 "num_base_bdevs": 4, 00:16:05.638 "num_base_bdevs_discovered": 2, 00:16:05.638 "num_base_bdevs_operational": 4, 00:16:05.638 "base_bdevs_list": [ 00:16:05.638 { 00:16:05.638 "name": "BaseBdev1", 00:16:05.638 "uuid": "4afa9126-648b-4ca6-8ff2-2702d26eb29a", 00:16:05.638 "is_configured": true, 00:16:05.638 "data_offset": 2048, 00:16:05.638 "data_size": 63488 00:16:05.638 }, 00:16:05.638 { 00:16:05.638 "name": "BaseBdev2", 00:16:05.638 "uuid": "f6ccaa19-b5db-40b5-98b0-520db60f694d", 00:16:05.638 "is_configured": true, 00:16:05.638 "data_offset": 2048, 00:16:05.638 "data_size": 63488 00:16:05.638 }, 00:16:05.638 { 00:16:05.638 "name": "BaseBdev3", 00:16:05.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.638 "is_configured": false, 00:16:05.638 "data_offset": 0, 00:16:05.638 "data_size": 0 00:16:05.638 }, 00:16:05.638 { 00:16:05.638 "name": "BaseBdev4", 00:16:05.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.638 "is_configured": false, 00:16:05.638 "data_offset": 0, 00:16:05.638 "data_size": 0 00:16:05.638 } 00:16:05.638 ] 00:16:05.638 }' 00:16:05.638 11:54:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.638 11:54:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.897 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:05.897 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.897 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.157 [2024-11-27 11:54:32.295192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:06.157 BaseBdev3 
00:16:06.157 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.157 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:06.157 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:06.157 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:06.157 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:06.157 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:06.157 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:06.157 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:06.157 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.157 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.157 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.157 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:06.157 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.157 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.157 [ 00:16:06.157 { 00:16:06.157 "name": "BaseBdev3", 00:16:06.157 "aliases": [ 00:16:06.157 "4a86f58a-e941-4d2c-811c-a10c1d81ab99" 00:16:06.157 ], 00:16:06.157 "product_name": "Malloc disk", 00:16:06.157 "block_size": 512, 00:16:06.157 "num_blocks": 65536, 00:16:06.157 "uuid": "4a86f58a-e941-4d2c-811c-a10c1d81ab99", 00:16:06.157 
"assigned_rate_limits": { 00:16:06.157 "rw_ios_per_sec": 0, 00:16:06.157 "rw_mbytes_per_sec": 0, 00:16:06.157 "r_mbytes_per_sec": 0, 00:16:06.157 "w_mbytes_per_sec": 0 00:16:06.157 }, 00:16:06.157 "claimed": true, 00:16:06.157 "claim_type": "exclusive_write", 00:16:06.157 "zoned": false, 00:16:06.157 "supported_io_types": { 00:16:06.157 "read": true, 00:16:06.157 "write": true, 00:16:06.157 "unmap": true, 00:16:06.157 "flush": true, 00:16:06.157 "reset": true, 00:16:06.157 "nvme_admin": false, 00:16:06.157 "nvme_io": false, 00:16:06.157 "nvme_io_md": false, 00:16:06.157 "write_zeroes": true, 00:16:06.157 "zcopy": true, 00:16:06.157 "get_zone_info": false, 00:16:06.157 "zone_management": false, 00:16:06.157 "zone_append": false, 00:16:06.157 "compare": false, 00:16:06.157 "compare_and_write": false, 00:16:06.157 "abort": true, 00:16:06.157 "seek_hole": false, 00:16:06.157 "seek_data": false, 00:16:06.158 "copy": true, 00:16:06.158 "nvme_iov_md": false 00:16:06.158 }, 00:16:06.158 "memory_domains": [ 00:16:06.158 { 00:16:06.158 "dma_device_id": "system", 00:16:06.158 "dma_device_type": 1 00:16:06.158 }, 00:16:06.158 { 00:16:06.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.158 "dma_device_type": 2 00:16:06.158 } 00:16:06.158 ], 00:16:06.158 "driver_specific": {} 00:16:06.158 } 00:16:06.158 ] 00:16:06.158 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.158 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:06.158 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:06.158 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:06.158 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:06.158 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:06.158 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:06.158 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.158 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.158 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:06.158 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.158 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.158 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.158 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.158 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.158 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.158 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.158 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.158 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.158 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.158 "name": "Existed_Raid", 00:16:06.158 "uuid": "3599dad9-07d1-40e7-8f11-e49cad1851bc", 00:16:06.158 "strip_size_kb": 64, 00:16:06.158 "state": "configuring", 00:16:06.158 "raid_level": "raid5f", 00:16:06.158 "superblock": true, 00:16:06.158 "num_base_bdevs": 4, 00:16:06.158 "num_base_bdevs_discovered": 3, 
00:16:06.158 "num_base_bdevs_operational": 4, 00:16:06.158 "base_bdevs_list": [ 00:16:06.158 { 00:16:06.158 "name": "BaseBdev1", 00:16:06.158 "uuid": "4afa9126-648b-4ca6-8ff2-2702d26eb29a", 00:16:06.158 "is_configured": true, 00:16:06.158 "data_offset": 2048, 00:16:06.158 "data_size": 63488 00:16:06.158 }, 00:16:06.158 { 00:16:06.158 "name": "BaseBdev2", 00:16:06.158 "uuid": "f6ccaa19-b5db-40b5-98b0-520db60f694d", 00:16:06.158 "is_configured": true, 00:16:06.158 "data_offset": 2048, 00:16:06.158 "data_size": 63488 00:16:06.158 }, 00:16:06.158 { 00:16:06.158 "name": "BaseBdev3", 00:16:06.158 "uuid": "4a86f58a-e941-4d2c-811c-a10c1d81ab99", 00:16:06.158 "is_configured": true, 00:16:06.158 "data_offset": 2048, 00:16:06.158 "data_size": 63488 00:16:06.158 }, 00:16:06.158 { 00:16:06.158 "name": "BaseBdev4", 00:16:06.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.158 "is_configured": false, 00:16:06.158 "data_offset": 0, 00:16:06.158 "data_size": 0 00:16:06.158 } 00:16:06.158 ] 00:16:06.158 }' 00:16:06.158 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.158 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.416 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:06.416 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.416 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.675 [2024-11-27 11:54:32.827414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:06.675 [2024-11-27 11:54:32.827750] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:06.675 [2024-11-27 11:54:32.827767] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:06.675 [2024-11-27 
11:54:32.828100] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:06.675 BaseBdev4 00:16:06.675 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.675 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:06.675 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:06.675 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:06.675 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:06.675 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:06.675 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:06.675 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:06.675 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.675 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.675 [2024-11-27 11:54:32.837040] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:06.675 [2024-11-27 11:54:32.837123] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:06.675 [2024-11-27 11:54:32.837461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.675 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.676 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:06.676 11:54:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.676 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.676 [ 00:16:06.676 { 00:16:06.676 "name": "BaseBdev4", 00:16:06.676 "aliases": [ 00:16:06.676 "11f0f4ea-a22c-4a1b-a3c2-cd03cdbed9f5" 00:16:06.676 ], 00:16:06.676 "product_name": "Malloc disk", 00:16:06.676 "block_size": 512, 00:16:06.676 "num_blocks": 65536, 00:16:06.676 "uuid": "11f0f4ea-a22c-4a1b-a3c2-cd03cdbed9f5", 00:16:06.676 "assigned_rate_limits": { 00:16:06.676 "rw_ios_per_sec": 0, 00:16:06.676 "rw_mbytes_per_sec": 0, 00:16:06.676 "r_mbytes_per_sec": 0, 00:16:06.676 "w_mbytes_per_sec": 0 00:16:06.676 }, 00:16:06.676 "claimed": true, 00:16:06.676 "claim_type": "exclusive_write", 00:16:06.676 "zoned": false, 00:16:06.676 "supported_io_types": { 00:16:06.676 "read": true, 00:16:06.676 "write": true, 00:16:06.676 "unmap": true, 00:16:06.676 "flush": true, 00:16:06.676 "reset": true, 00:16:06.676 "nvme_admin": false, 00:16:06.676 "nvme_io": false, 00:16:06.676 "nvme_io_md": false, 00:16:06.676 "write_zeroes": true, 00:16:06.676 "zcopy": true, 00:16:06.676 "get_zone_info": false, 00:16:06.676 "zone_management": false, 00:16:06.676 "zone_append": false, 00:16:06.676 "compare": false, 00:16:06.676 "compare_and_write": false, 00:16:06.676 "abort": true, 00:16:06.676 "seek_hole": false, 00:16:06.676 "seek_data": false, 00:16:06.676 "copy": true, 00:16:06.676 "nvme_iov_md": false 00:16:06.676 }, 00:16:06.676 "memory_domains": [ 00:16:06.676 { 00:16:06.676 "dma_device_id": "system", 00:16:06.676 "dma_device_type": 1 00:16:06.676 }, 00:16:06.676 { 00:16:06.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.676 "dma_device_type": 2 00:16:06.676 } 00:16:06.676 ], 00:16:06.676 "driver_specific": {} 00:16:06.676 } 00:16:06.676 ] 00:16:06.676 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.676 11:54:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:06.676 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:06.676 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:06.676 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:06.676 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.676 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.676 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.676 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.676 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:06.676 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.676 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.676 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.676 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.676 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.676 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.676 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.676 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:06.676 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.676 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.676 "name": "Existed_Raid", 00:16:06.676 "uuid": "3599dad9-07d1-40e7-8f11-e49cad1851bc", 00:16:06.676 "strip_size_kb": 64, 00:16:06.676 "state": "online", 00:16:06.676 "raid_level": "raid5f", 00:16:06.676 "superblock": true, 00:16:06.676 "num_base_bdevs": 4, 00:16:06.676 "num_base_bdevs_discovered": 4, 00:16:06.676 "num_base_bdevs_operational": 4, 00:16:06.676 "base_bdevs_list": [ 00:16:06.676 { 00:16:06.676 "name": "BaseBdev1", 00:16:06.676 "uuid": "4afa9126-648b-4ca6-8ff2-2702d26eb29a", 00:16:06.676 "is_configured": true, 00:16:06.676 "data_offset": 2048, 00:16:06.676 "data_size": 63488 00:16:06.676 }, 00:16:06.676 { 00:16:06.676 "name": "BaseBdev2", 00:16:06.676 "uuid": "f6ccaa19-b5db-40b5-98b0-520db60f694d", 00:16:06.676 "is_configured": true, 00:16:06.676 "data_offset": 2048, 00:16:06.676 "data_size": 63488 00:16:06.676 }, 00:16:06.676 { 00:16:06.676 "name": "BaseBdev3", 00:16:06.676 "uuid": "4a86f58a-e941-4d2c-811c-a10c1d81ab99", 00:16:06.676 "is_configured": true, 00:16:06.676 "data_offset": 2048, 00:16:06.676 "data_size": 63488 00:16:06.676 }, 00:16:06.676 { 00:16:06.676 "name": "BaseBdev4", 00:16:06.676 "uuid": "11f0f4ea-a22c-4a1b-a3c2-cd03cdbed9f5", 00:16:06.676 "is_configured": true, 00:16:06.676 "data_offset": 2048, 00:16:06.676 "data_size": 63488 00:16:06.676 } 00:16:06.676 ] 00:16:06.676 }' 00:16:06.676 11:54:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.676 11:54:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.243 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:07.243 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:07.243 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:07.243 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:07.243 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:07.243 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:07.243 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:07.243 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:07.243 11:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.243 11:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.243 [2024-11-27 11:54:33.374894] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:07.243 11:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.243 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:07.243 "name": "Existed_Raid", 00:16:07.243 "aliases": [ 00:16:07.243 "3599dad9-07d1-40e7-8f11-e49cad1851bc" 00:16:07.243 ], 00:16:07.243 "product_name": "Raid Volume", 00:16:07.243 "block_size": 512, 00:16:07.243 "num_blocks": 190464, 00:16:07.243 "uuid": "3599dad9-07d1-40e7-8f11-e49cad1851bc", 00:16:07.243 "assigned_rate_limits": { 00:16:07.243 "rw_ios_per_sec": 0, 00:16:07.243 "rw_mbytes_per_sec": 0, 00:16:07.243 "r_mbytes_per_sec": 0, 00:16:07.243 "w_mbytes_per_sec": 0 00:16:07.243 }, 00:16:07.243 "claimed": false, 00:16:07.243 "zoned": false, 00:16:07.243 "supported_io_types": { 00:16:07.243 "read": true, 00:16:07.243 "write": true, 00:16:07.243 "unmap": false, 00:16:07.243 "flush": false, 
00:16:07.243 "reset": true, 00:16:07.243 "nvme_admin": false, 00:16:07.243 "nvme_io": false, 00:16:07.243 "nvme_io_md": false, 00:16:07.243 "write_zeroes": true, 00:16:07.243 "zcopy": false, 00:16:07.243 "get_zone_info": false, 00:16:07.243 "zone_management": false, 00:16:07.243 "zone_append": false, 00:16:07.243 "compare": false, 00:16:07.243 "compare_and_write": false, 00:16:07.243 "abort": false, 00:16:07.243 "seek_hole": false, 00:16:07.243 "seek_data": false, 00:16:07.243 "copy": false, 00:16:07.243 "nvme_iov_md": false 00:16:07.243 }, 00:16:07.243 "driver_specific": { 00:16:07.243 "raid": { 00:16:07.243 "uuid": "3599dad9-07d1-40e7-8f11-e49cad1851bc", 00:16:07.243 "strip_size_kb": 64, 00:16:07.243 "state": "online", 00:16:07.243 "raid_level": "raid5f", 00:16:07.243 "superblock": true, 00:16:07.243 "num_base_bdevs": 4, 00:16:07.243 "num_base_bdevs_discovered": 4, 00:16:07.243 "num_base_bdevs_operational": 4, 00:16:07.243 "base_bdevs_list": [ 00:16:07.243 { 00:16:07.243 "name": "BaseBdev1", 00:16:07.243 "uuid": "4afa9126-648b-4ca6-8ff2-2702d26eb29a", 00:16:07.243 "is_configured": true, 00:16:07.243 "data_offset": 2048, 00:16:07.243 "data_size": 63488 00:16:07.243 }, 00:16:07.243 { 00:16:07.243 "name": "BaseBdev2", 00:16:07.243 "uuid": "f6ccaa19-b5db-40b5-98b0-520db60f694d", 00:16:07.243 "is_configured": true, 00:16:07.243 "data_offset": 2048, 00:16:07.243 "data_size": 63488 00:16:07.243 }, 00:16:07.243 { 00:16:07.243 "name": "BaseBdev3", 00:16:07.243 "uuid": "4a86f58a-e941-4d2c-811c-a10c1d81ab99", 00:16:07.243 "is_configured": true, 00:16:07.243 "data_offset": 2048, 00:16:07.243 "data_size": 63488 00:16:07.243 }, 00:16:07.243 { 00:16:07.243 "name": "BaseBdev4", 00:16:07.243 "uuid": "11f0f4ea-a22c-4a1b-a3c2-cd03cdbed9f5", 00:16:07.243 "is_configured": true, 00:16:07.243 "data_offset": 2048, 00:16:07.243 "data_size": 63488 00:16:07.243 } 00:16:07.243 ] 00:16:07.243 } 00:16:07.244 } 00:16:07.244 }' 00:16:07.244 11:54:33 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:07.244 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:07.244 BaseBdev2 00:16:07.244 BaseBdev3 00:16:07.244 BaseBdev4' 00:16:07.244 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:07.244 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:07.244 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:07.244 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:07.244 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:07.244 11:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.244 11:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.244 11:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.244 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:07.244 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:07.244 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:07.244 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:07.244 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:07.244 11:54:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.244 11:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.244 11:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.244 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:07.244 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:07.244 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:07.244 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:07.244 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:07.244 11:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.244 11:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.244 11:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.503 [2024-11-27 11:54:33.690155] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.503 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.503 "name": "Existed_Raid", 00:16:07.503 "uuid": "3599dad9-07d1-40e7-8f11-e49cad1851bc", 00:16:07.503 "strip_size_kb": 64, 00:16:07.503 "state": "online", 00:16:07.503 "raid_level": "raid5f", 00:16:07.503 "superblock": true, 00:16:07.503 "num_base_bdevs": 4, 00:16:07.503 "num_base_bdevs_discovered": 3, 00:16:07.503 "num_base_bdevs_operational": 3, 00:16:07.503 "base_bdevs_list": [ 00:16:07.503 { 00:16:07.503 "name": null, 00:16:07.503 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:07.503 "is_configured": false, 00:16:07.503 "data_offset": 0, 00:16:07.503 "data_size": 63488 00:16:07.503 }, 00:16:07.503 { 00:16:07.503 "name": "BaseBdev2", 00:16:07.503 "uuid": "f6ccaa19-b5db-40b5-98b0-520db60f694d", 00:16:07.503 "is_configured": true, 00:16:07.503 "data_offset": 2048, 00:16:07.503 "data_size": 63488 00:16:07.503 }, 00:16:07.503 { 00:16:07.503 "name": "BaseBdev3", 00:16:07.503 "uuid": "4a86f58a-e941-4d2c-811c-a10c1d81ab99", 00:16:07.503 "is_configured": true, 00:16:07.503 "data_offset": 2048, 00:16:07.503 "data_size": 63488 00:16:07.503 }, 00:16:07.503 { 00:16:07.503 "name": "BaseBdev4", 00:16:07.503 "uuid": "11f0f4ea-a22c-4a1b-a3c2-cd03cdbed9f5", 00:16:07.503 "is_configured": true, 00:16:07.504 "data_offset": 2048, 00:16:07.504 "data_size": 63488 00:16:07.504 } 00:16:07.504 ] 00:16:07.504 }' 00:16:07.504 11:54:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.504 11:54:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.071 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:08.071 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:08.071 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.071 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:08.071 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.071 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.071 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.071 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:16:08.071 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:08.071 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:08.071 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.071 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.071 [2024-11-27 11:54:34.327518] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:08.071 [2024-11-27 11:54:34.327702] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:08.071 [2024-11-27 11:54:34.444908] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:08.071 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.071 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:08.072 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:08.072 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.072 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.072 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.072 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:08.331 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.331 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:08.331 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:08.331 
11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:08.331 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.331 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.331 [2024-11-27 11:54:34.508830] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:08.331 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.331 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:08.331 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:08.331 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.331 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.331 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:08.331 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.331 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.331 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:08.331 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:08.331 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:08.331 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.331 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.331 [2024-11-27 11:54:34.683518] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:08.331 [2024-11-27 11:54:34.683638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:08.591 BaseBdev2 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.591 [ 00:16:08.591 { 00:16:08.591 "name": "BaseBdev2", 00:16:08.591 "aliases": [ 00:16:08.591 "c7e45593-bea7-4deb-ac39-fd1c4419a6e7" 00:16:08.591 ], 00:16:08.591 "product_name": "Malloc disk", 00:16:08.591 "block_size": 512, 00:16:08.591 "num_blocks": 65536, 00:16:08.591 "uuid": 
"c7e45593-bea7-4deb-ac39-fd1c4419a6e7", 00:16:08.591 "assigned_rate_limits": { 00:16:08.591 "rw_ios_per_sec": 0, 00:16:08.591 "rw_mbytes_per_sec": 0, 00:16:08.591 "r_mbytes_per_sec": 0, 00:16:08.591 "w_mbytes_per_sec": 0 00:16:08.591 }, 00:16:08.591 "claimed": false, 00:16:08.591 "zoned": false, 00:16:08.591 "supported_io_types": { 00:16:08.591 "read": true, 00:16:08.591 "write": true, 00:16:08.591 "unmap": true, 00:16:08.591 "flush": true, 00:16:08.591 "reset": true, 00:16:08.591 "nvme_admin": false, 00:16:08.591 "nvme_io": false, 00:16:08.591 "nvme_io_md": false, 00:16:08.591 "write_zeroes": true, 00:16:08.591 "zcopy": true, 00:16:08.591 "get_zone_info": false, 00:16:08.591 "zone_management": false, 00:16:08.591 "zone_append": false, 00:16:08.591 "compare": false, 00:16:08.591 "compare_and_write": false, 00:16:08.591 "abort": true, 00:16:08.591 "seek_hole": false, 00:16:08.591 "seek_data": false, 00:16:08.591 "copy": true, 00:16:08.591 "nvme_iov_md": false 00:16:08.591 }, 00:16:08.591 "memory_domains": [ 00:16:08.591 { 00:16:08.591 "dma_device_id": "system", 00:16:08.591 "dma_device_type": 1 00:16:08.591 }, 00:16:08.591 { 00:16:08.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.591 "dma_device_type": 2 00:16:08.591 } 00:16:08.591 ], 00:16:08.591 "driver_specific": {} 00:16:08.591 } 00:16:08.591 ] 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.591 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.851 BaseBdev3 00:16:08.851 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.851 11:54:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:08.851 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:08.851 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:08.851 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:08.851 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:08.851 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:08.851 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:08.851 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.851 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.851 11:54:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.851 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:08.851 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.851 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.851 [ 00:16:08.851 { 00:16:08.851 "name": "BaseBdev3", 00:16:08.851 "aliases": [ 00:16:08.851 "f0842ea9-8ef5-4242-97a1-0702b09966f4" 00:16:08.851 ], 00:16:08.851 
"product_name": "Malloc disk", 00:16:08.851 "block_size": 512, 00:16:08.851 "num_blocks": 65536, 00:16:08.851 "uuid": "f0842ea9-8ef5-4242-97a1-0702b09966f4", 00:16:08.851 "assigned_rate_limits": { 00:16:08.851 "rw_ios_per_sec": 0, 00:16:08.851 "rw_mbytes_per_sec": 0, 00:16:08.851 "r_mbytes_per_sec": 0, 00:16:08.851 "w_mbytes_per_sec": 0 00:16:08.851 }, 00:16:08.851 "claimed": false, 00:16:08.851 "zoned": false, 00:16:08.851 "supported_io_types": { 00:16:08.851 "read": true, 00:16:08.851 "write": true, 00:16:08.851 "unmap": true, 00:16:08.851 "flush": true, 00:16:08.851 "reset": true, 00:16:08.851 "nvme_admin": false, 00:16:08.851 "nvme_io": false, 00:16:08.851 "nvme_io_md": false, 00:16:08.851 "write_zeroes": true, 00:16:08.851 "zcopy": true, 00:16:08.851 "get_zone_info": false, 00:16:08.851 "zone_management": false, 00:16:08.851 "zone_append": false, 00:16:08.851 "compare": false, 00:16:08.851 "compare_and_write": false, 00:16:08.851 "abort": true, 00:16:08.851 "seek_hole": false, 00:16:08.851 "seek_data": false, 00:16:08.851 "copy": true, 00:16:08.852 "nvme_iov_md": false 00:16:08.852 }, 00:16:08.852 "memory_domains": [ 00:16:08.852 { 00:16:08.852 "dma_device_id": "system", 00:16:08.852 "dma_device_type": 1 00:16:08.852 }, 00:16:08.852 { 00:16:08.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.852 "dma_device_type": 2 00:16:08.852 } 00:16:08.852 ], 00:16:08.852 "driver_specific": {} 00:16:08.852 } 00:16:08.852 ] 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.852 BaseBdev4 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.852 [ 00:16:08.852 { 00:16:08.852 "name": "BaseBdev4", 00:16:08.852 
"aliases": [ 00:16:08.852 "33d76956-b949-4e54-b32a-f6543826a0b9" 00:16:08.852 ], 00:16:08.852 "product_name": "Malloc disk", 00:16:08.852 "block_size": 512, 00:16:08.852 "num_blocks": 65536, 00:16:08.852 "uuid": "33d76956-b949-4e54-b32a-f6543826a0b9", 00:16:08.852 "assigned_rate_limits": { 00:16:08.852 "rw_ios_per_sec": 0, 00:16:08.852 "rw_mbytes_per_sec": 0, 00:16:08.852 "r_mbytes_per_sec": 0, 00:16:08.852 "w_mbytes_per_sec": 0 00:16:08.852 }, 00:16:08.852 "claimed": false, 00:16:08.852 "zoned": false, 00:16:08.852 "supported_io_types": { 00:16:08.852 "read": true, 00:16:08.852 "write": true, 00:16:08.852 "unmap": true, 00:16:08.852 "flush": true, 00:16:08.852 "reset": true, 00:16:08.852 "nvme_admin": false, 00:16:08.852 "nvme_io": false, 00:16:08.852 "nvme_io_md": false, 00:16:08.852 "write_zeroes": true, 00:16:08.852 "zcopy": true, 00:16:08.852 "get_zone_info": false, 00:16:08.852 "zone_management": false, 00:16:08.852 "zone_append": false, 00:16:08.852 "compare": false, 00:16:08.852 "compare_and_write": false, 00:16:08.852 "abort": true, 00:16:08.852 "seek_hole": false, 00:16:08.852 "seek_data": false, 00:16:08.852 "copy": true, 00:16:08.852 "nvme_iov_md": false 00:16:08.852 }, 00:16:08.852 "memory_domains": [ 00:16:08.852 { 00:16:08.852 "dma_device_id": "system", 00:16:08.852 "dma_device_type": 1 00:16:08.852 }, 00:16:08.852 { 00:16:08.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.852 "dma_device_type": 2 00:16:08.852 } 00:16:08.852 ], 00:16:08.852 "driver_specific": {} 00:16:08.852 } 00:16:08.852 ] 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:08.852 
11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.852 [2024-11-27 11:54:35.116100] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:08.852 [2024-11-27 11:54:35.116196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:08.852 [2024-11-27 11:54:35.116249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:08.852 [2024-11-27 11:54:35.118359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:08.852 [2024-11-27 11:54:35.118464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.852 "name": "Existed_Raid", 00:16:08.852 "uuid": "8a7af690-cda8-4b28-a9f4-841f2f825be9", 00:16:08.852 "strip_size_kb": 64, 00:16:08.852 "state": "configuring", 00:16:08.852 "raid_level": "raid5f", 00:16:08.852 "superblock": true, 00:16:08.852 "num_base_bdevs": 4, 00:16:08.852 "num_base_bdevs_discovered": 3, 00:16:08.852 "num_base_bdevs_operational": 4, 00:16:08.852 "base_bdevs_list": [ 00:16:08.852 { 00:16:08.852 "name": "BaseBdev1", 00:16:08.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.852 "is_configured": false, 00:16:08.852 "data_offset": 0, 00:16:08.852 "data_size": 0 00:16:08.852 }, 00:16:08.852 { 00:16:08.852 "name": "BaseBdev2", 00:16:08.852 "uuid": "c7e45593-bea7-4deb-ac39-fd1c4419a6e7", 00:16:08.852 "is_configured": true, 00:16:08.852 "data_offset": 2048, 00:16:08.852 "data_size": 63488 00:16:08.852 }, 00:16:08.852 { 00:16:08.852 "name": "BaseBdev3", 
00:16:08.852 "uuid": "f0842ea9-8ef5-4242-97a1-0702b09966f4", 00:16:08.852 "is_configured": true, 00:16:08.852 "data_offset": 2048, 00:16:08.852 "data_size": 63488 00:16:08.852 }, 00:16:08.852 { 00:16:08.852 "name": "BaseBdev4", 00:16:08.852 "uuid": "33d76956-b949-4e54-b32a-f6543826a0b9", 00:16:08.852 "is_configured": true, 00:16:08.852 "data_offset": 2048, 00:16:08.852 "data_size": 63488 00:16:08.852 } 00:16:08.852 ] 00:16:08.852 }' 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.852 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.421 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:09.421 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.421 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.421 [2024-11-27 11:54:35.619309] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:09.421 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.421 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:09.421 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.421 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.421 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.421 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.421 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.421 
11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.421 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.421 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.421 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.421 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.422 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.422 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.422 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.422 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.422 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.422 "name": "Existed_Raid", 00:16:09.422 "uuid": "8a7af690-cda8-4b28-a9f4-841f2f825be9", 00:16:09.422 "strip_size_kb": 64, 00:16:09.422 "state": "configuring", 00:16:09.422 "raid_level": "raid5f", 00:16:09.422 "superblock": true, 00:16:09.422 "num_base_bdevs": 4, 00:16:09.422 "num_base_bdevs_discovered": 2, 00:16:09.422 "num_base_bdevs_operational": 4, 00:16:09.422 "base_bdevs_list": [ 00:16:09.422 { 00:16:09.422 "name": "BaseBdev1", 00:16:09.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.422 "is_configured": false, 00:16:09.422 "data_offset": 0, 00:16:09.422 "data_size": 0 00:16:09.422 }, 00:16:09.422 { 00:16:09.422 "name": null, 00:16:09.422 "uuid": "c7e45593-bea7-4deb-ac39-fd1c4419a6e7", 00:16:09.422 "is_configured": false, 00:16:09.422 "data_offset": 0, 00:16:09.422 "data_size": 63488 00:16:09.422 }, 00:16:09.422 { 
00:16:09.422 "name": "BaseBdev3", 00:16:09.422 "uuid": "f0842ea9-8ef5-4242-97a1-0702b09966f4", 00:16:09.422 "is_configured": true, 00:16:09.422 "data_offset": 2048, 00:16:09.422 "data_size": 63488 00:16:09.422 }, 00:16:09.422 { 00:16:09.422 "name": "BaseBdev4", 00:16:09.422 "uuid": "33d76956-b949-4e54-b32a-f6543826a0b9", 00:16:09.422 "is_configured": true, 00:16:09.422 "data_offset": 2048, 00:16:09.422 "data_size": 63488 00:16:09.422 } 00:16:09.422 ] 00:16:09.422 }' 00:16:09.422 11:54:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.422 11:54:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.990 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.990 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:09.990 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.990 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.990 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.990 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:09.990 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:09.990 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.990 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.990 [2024-11-27 11:54:36.200217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:09.990 BaseBdev1 00:16:09.990 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:09.990 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:09.990 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:09.990 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:09.990 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:09.990 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:09.990 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:09.990 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:09.990 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.990 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.990 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.991 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:09.991 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.991 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.991 [ 00:16:09.991 { 00:16:09.991 "name": "BaseBdev1", 00:16:09.991 "aliases": [ 00:16:09.991 "714bb007-1912-47df-b738-b97879ea594a" 00:16:09.991 ], 00:16:09.991 "product_name": "Malloc disk", 00:16:09.991 "block_size": 512, 00:16:09.991 "num_blocks": 65536, 00:16:09.991 "uuid": "714bb007-1912-47df-b738-b97879ea594a", 00:16:09.991 "assigned_rate_limits": { 00:16:09.991 "rw_ios_per_sec": 0, 00:16:09.991 "rw_mbytes_per_sec": 0, 00:16:09.991 
"r_mbytes_per_sec": 0, 00:16:09.991 "w_mbytes_per_sec": 0 00:16:09.991 }, 00:16:09.991 "claimed": true, 00:16:09.991 "claim_type": "exclusive_write", 00:16:09.991 "zoned": false, 00:16:09.991 "supported_io_types": { 00:16:09.991 "read": true, 00:16:09.991 "write": true, 00:16:09.991 "unmap": true, 00:16:09.991 "flush": true, 00:16:09.991 "reset": true, 00:16:09.991 "nvme_admin": false, 00:16:09.991 "nvme_io": false, 00:16:09.991 "nvme_io_md": false, 00:16:09.991 "write_zeroes": true, 00:16:09.991 "zcopy": true, 00:16:09.991 "get_zone_info": false, 00:16:09.991 "zone_management": false, 00:16:09.991 "zone_append": false, 00:16:09.991 "compare": false, 00:16:09.991 "compare_and_write": false, 00:16:09.991 "abort": true, 00:16:09.991 "seek_hole": false, 00:16:09.991 "seek_data": false, 00:16:09.991 "copy": true, 00:16:09.991 "nvme_iov_md": false 00:16:09.991 }, 00:16:09.991 "memory_domains": [ 00:16:09.991 { 00:16:09.991 "dma_device_id": "system", 00:16:09.991 "dma_device_type": 1 00:16:09.991 }, 00:16:09.991 { 00:16:09.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.991 "dma_device_type": 2 00:16:09.991 } 00:16:09.991 ], 00:16:09.991 "driver_specific": {} 00:16:09.991 } 00:16:09.991 ] 00:16:09.991 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.991 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:09.991 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:09.991 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.991 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.991 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.991 11:54:36 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.991 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.991 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.991 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.991 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.991 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.991 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.991 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.991 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.991 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.991 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.991 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.991 "name": "Existed_Raid", 00:16:09.991 "uuid": "8a7af690-cda8-4b28-a9f4-841f2f825be9", 00:16:09.991 "strip_size_kb": 64, 00:16:09.991 "state": "configuring", 00:16:09.991 "raid_level": "raid5f", 00:16:09.991 "superblock": true, 00:16:09.991 "num_base_bdevs": 4, 00:16:09.991 "num_base_bdevs_discovered": 3, 00:16:09.991 "num_base_bdevs_operational": 4, 00:16:09.991 "base_bdevs_list": [ 00:16:09.991 { 00:16:09.991 "name": "BaseBdev1", 00:16:09.991 "uuid": "714bb007-1912-47df-b738-b97879ea594a", 00:16:09.991 "is_configured": true, 00:16:09.991 "data_offset": 2048, 00:16:09.991 "data_size": 63488 00:16:09.991 
}, 00:16:09.991 { 00:16:09.991 "name": null, 00:16:09.991 "uuid": "c7e45593-bea7-4deb-ac39-fd1c4419a6e7", 00:16:09.991 "is_configured": false, 00:16:09.991 "data_offset": 0, 00:16:09.991 "data_size": 63488 00:16:09.991 }, 00:16:09.991 { 00:16:09.991 "name": "BaseBdev3", 00:16:09.991 "uuid": "f0842ea9-8ef5-4242-97a1-0702b09966f4", 00:16:09.991 "is_configured": true, 00:16:09.991 "data_offset": 2048, 00:16:09.991 "data_size": 63488 00:16:09.991 }, 00:16:09.991 { 00:16:09.991 "name": "BaseBdev4", 00:16:09.991 "uuid": "33d76956-b949-4e54-b32a-f6543826a0b9", 00:16:09.991 "is_configured": true, 00:16:09.991 "data_offset": 2048, 00:16:09.991 "data_size": 63488 00:16:09.991 } 00:16:09.991 ] 00:16:09.991 }' 00:16:09.991 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.991 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.591 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.591 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:10.591 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.591 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.591 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.591 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:10.591 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:10.591 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.591 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.591 
[2024-11-27 11:54:36.759429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:10.591 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.591 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:10.591 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:10.591 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.591 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.591 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.591 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:10.591 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.591 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.591 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.591 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.591 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.591 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.591 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.591 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.591 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:10.592 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.592 "name": "Existed_Raid", 00:16:10.592 "uuid": "8a7af690-cda8-4b28-a9f4-841f2f825be9", 00:16:10.592 "strip_size_kb": 64, 00:16:10.592 "state": "configuring", 00:16:10.592 "raid_level": "raid5f", 00:16:10.592 "superblock": true, 00:16:10.592 "num_base_bdevs": 4, 00:16:10.592 "num_base_bdevs_discovered": 2, 00:16:10.592 "num_base_bdevs_operational": 4, 00:16:10.592 "base_bdevs_list": [ 00:16:10.592 { 00:16:10.592 "name": "BaseBdev1", 00:16:10.592 "uuid": "714bb007-1912-47df-b738-b97879ea594a", 00:16:10.592 "is_configured": true, 00:16:10.592 "data_offset": 2048, 00:16:10.592 "data_size": 63488 00:16:10.592 }, 00:16:10.592 { 00:16:10.592 "name": null, 00:16:10.592 "uuid": "c7e45593-bea7-4deb-ac39-fd1c4419a6e7", 00:16:10.592 "is_configured": false, 00:16:10.592 "data_offset": 0, 00:16:10.592 "data_size": 63488 00:16:10.592 }, 00:16:10.592 { 00:16:10.592 "name": null, 00:16:10.592 "uuid": "f0842ea9-8ef5-4242-97a1-0702b09966f4", 00:16:10.592 "is_configured": false, 00:16:10.592 "data_offset": 0, 00:16:10.592 "data_size": 63488 00:16:10.592 }, 00:16:10.592 { 00:16:10.592 "name": "BaseBdev4", 00:16:10.592 "uuid": "33d76956-b949-4e54-b32a-f6543826a0b9", 00:16:10.592 "is_configured": true, 00:16:10.592 "data_offset": 2048, 00:16:10.592 "data_size": 63488 00:16:10.592 } 00:16:10.592 ] 00:16:10.592 }' 00:16:10.592 11:54:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.592 11:54:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.854 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.854 11:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.854 11:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:10.854 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:11.170 11:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.170 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:11.170 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:11.170 11:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.171 11:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.171 [2024-11-27 11:54:37.286547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:11.171 11:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.171 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:11.171 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.171 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:11.171 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.171 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.171 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:11.171 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.171 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.171 11:54:37 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.171 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.171 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.171 11:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.171 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.171 11:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.171 11:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.171 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.171 "name": "Existed_Raid", 00:16:11.171 "uuid": "8a7af690-cda8-4b28-a9f4-841f2f825be9", 00:16:11.171 "strip_size_kb": 64, 00:16:11.171 "state": "configuring", 00:16:11.171 "raid_level": "raid5f", 00:16:11.171 "superblock": true, 00:16:11.171 "num_base_bdevs": 4, 00:16:11.171 "num_base_bdevs_discovered": 3, 00:16:11.171 "num_base_bdevs_operational": 4, 00:16:11.171 "base_bdevs_list": [ 00:16:11.171 { 00:16:11.171 "name": "BaseBdev1", 00:16:11.171 "uuid": "714bb007-1912-47df-b738-b97879ea594a", 00:16:11.171 "is_configured": true, 00:16:11.171 "data_offset": 2048, 00:16:11.171 "data_size": 63488 00:16:11.171 }, 00:16:11.171 { 00:16:11.171 "name": null, 00:16:11.171 "uuid": "c7e45593-bea7-4deb-ac39-fd1c4419a6e7", 00:16:11.171 "is_configured": false, 00:16:11.171 "data_offset": 0, 00:16:11.171 "data_size": 63488 00:16:11.171 }, 00:16:11.171 { 00:16:11.171 "name": "BaseBdev3", 00:16:11.171 "uuid": "f0842ea9-8ef5-4242-97a1-0702b09966f4", 00:16:11.171 "is_configured": true, 00:16:11.171 "data_offset": 2048, 00:16:11.171 "data_size": 63488 00:16:11.171 }, 00:16:11.171 { 
00:16:11.171 "name": "BaseBdev4", 00:16:11.171 "uuid": "33d76956-b949-4e54-b32a-f6543826a0b9", 00:16:11.171 "is_configured": true, 00:16:11.171 "data_offset": 2048, 00:16:11.171 "data_size": 63488 00:16:11.171 } 00:16:11.171 ] 00:16:11.171 }' 00:16:11.171 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.171 11:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.430 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.430 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:11.430 11:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.430 11:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.430 11:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.689 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:11.689 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:11.689 11:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.689 11:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.689 [2024-11-27 11:54:37.821849] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:11.689 11:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.689 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:11.689 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:16:11.689 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:11.689 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.689 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.689 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:11.689 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.689 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.689 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.689 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.689 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.689 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.689 11:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.689 11:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.689 11:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.689 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.689 "name": "Existed_Raid", 00:16:11.689 "uuid": "8a7af690-cda8-4b28-a9f4-841f2f825be9", 00:16:11.689 "strip_size_kb": 64, 00:16:11.689 "state": "configuring", 00:16:11.689 "raid_level": "raid5f", 00:16:11.689 "superblock": true, 00:16:11.689 "num_base_bdevs": 4, 00:16:11.689 "num_base_bdevs_discovered": 2, 00:16:11.689 
"num_base_bdevs_operational": 4, 00:16:11.689 "base_bdevs_list": [ 00:16:11.689 { 00:16:11.689 "name": null, 00:16:11.689 "uuid": "714bb007-1912-47df-b738-b97879ea594a", 00:16:11.689 "is_configured": false, 00:16:11.689 "data_offset": 0, 00:16:11.689 "data_size": 63488 00:16:11.689 }, 00:16:11.689 { 00:16:11.689 "name": null, 00:16:11.689 "uuid": "c7e45593-bea7-4deb-ac39-fd1c4419a6e7", 00:16:11.689 "is_configured": false, 00:16:11.689 "data_offset": 0, 00:16:11.689 "data_size": 63488 00:16:11.689 }, 00:16:11.689 { 00:16:11.689 "name": "BaseBdev3", 00:16:11.689 "uuid": "f0842ea9-8ef5-4242-97a1-0702b09966f4", 00:16:11.689 "is_configured": true, 00:16:11.689 "data_offset": 2048, 00:16:11.689 "data_size": 63488 00:16:11.689 }, 00:16:11.689 { 00:16:11.689 "name": "BaseBdev4", 00:16:11.689 "uuid": "33d76956-b949-4e54-b32a-f6543826a0b9", 00:16:11.689 "is_configured": true, 00:16:11.689 "data_offset": 2048, 00:16:11.689 "data_size": 63488 00:16:11.689 } 00:16:11.689 ] 00:16:11.689 }' 00:16:11.689 11:54:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.689 11:54:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.255 11:54:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:12.255 11:54:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.255 11:54:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.255 11:54:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.255 11:54:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.255 11:54:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:12.255 11:54:38 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:12.255 11:54:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.255 11:54:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.255 [2024-11-27 11:54:38.462116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:12.255 11:54:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.255 11:54:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:12.255 11:54:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.255 11:54:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:12.255 11:54:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.255 11:54:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.255 11:54:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:12.255 11:54:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.255 11:54:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.255 11:54:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.255 11:54:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.255 11:54:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.255 11:54:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:12.255 11:54:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.255 11:54:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.255 11:54:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.255 11:54:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.255 "name": "Existed_Raid", 00:16:12.255 "uuid": "8a7af690-cda8-4b28-a9f4-841f2f825be9", 00:16:12.255 "strip_size_kb": 64, 00:16:12.255 "state": "configuring", 00:16:12.255 "raid_level": "raid5f", 00:16:12.255 "superblock": true, 00:16:12.255 "num_base_bdevs": 4, 00:16:12.255 "num_base_bdevs_discovered": 3, 00:16:12.255 "num_base_bdevs_operational": 4, 00:16:12.255 "base_bdevs_list": [ 00:16:12.255 { 00:16:12.255 "name": null, 00:16:12.255 "uuid": "714bb007-1912-47df-b738-b97879ea594a", 00:16:12.255 "is_configured": false, 00:16:12.255 "data_offset": 0, 00:16:12.255 "data_size": 63488 00:16:12.255 }, 00:16:12.255 { 00:16:12.255 "name": "BaseBdev2", 00:16:12.255 "uuid": "c7e45593-bea7-4deb-ac39-fd1c4419a6e7", 00:16:12.255 "is_configured": true, 00:16:12.255 "data_offset": 2048, 00:16:12.255 "data_size": 63488 00:16:12.255 }, 00:16:12.255 { 00:16:12.255 "name": "BaseBdev3", 00:16:12.255 "uuid": "f0842ea9-8ef5-4242-97a1-0702b09966f4", 00:16:12.255 "is_configured": true, 00:16:12.255 "data_offset": 2048, 00:16:12.255 "data_size": 63488 00:16:12.255 }, 00:16:12.255 { 00:16:12.255 "name": "BaseBdev4", 00:16:12.255 "uuid": "33d76956-b949-4e54-b32a-f6543826a0b9", 00:16:12.255 "is_configured": true, 00:16:12.255 "data_offset": 2048, 00:16:12.255 "data_size": 63488 00:16:12.255 } 00:16:12.255 ] 00:16:12.255 }' 00:16:12.255 11:54:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.255 11:54:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:16:12.822 11:54:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.822 11:54:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.822 11:54:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:12.822 11:54:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.822 11:54:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.822 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:12.822 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.822 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.822 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.822 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:12.822 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.822 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 714bb007-1912-47df-b738-b97879ea594a 00:16:12.822 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.822 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.822 [2024-11-27 11:54:39.100333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:12.822 [2024-11-27 11:54:39.100609] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:12.822 [2024-11-27 
11:54:39.100623] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:12.822 [2024-11-27 11:54:39.100953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:12.822 NewBaseBdev 00:16:12.822 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.822 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:12.822 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:12.822 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:12.822 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:12.822 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:12.822 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:12.822 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:12.822 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.822 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.822 [2024-11-27 11:54:39.109553] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:12.822 [2024-11-27 11:54:39.109582] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:12.822 [2024-11-27 11:54:39.109911] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.822 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.822 11:54:39 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:12.822 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.822 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.823 [ 00:16:12.823 { 00:16:12.823 "name": "NewBaseBdev", 00:16:12.823 "aliases": [ 00:16:12.823 "714bb007-1912-47df-b738-b97879ea594a" 00:16:12.823 ], 00:16:12.823 "product_name": "Malloc disk", 00:16:12.823 "block_size": 512, 00:16:12.823 "num_blocks": 65536, 00:16:12.823 "uuid": "714bb007-1912-47df-b738-b97879ea594a", 00:16:12.823 "assigned_rate_limits": { 00:16:12.823 "rw_ios_per_sec": 0, 00:16:12.823 "rw_mbytes_per_sec": 0, 00:16:12.823 "r_mbytes_per_sec": 0, 00:16:12.823 "w_mbytes_per_sec": 0 00:16:12.823 }, 00:16:12.823 "claimed": true, 00:16:12.823 "claim_type": "exclusive_write", 00:16:12.823 "zoned": false, 00:16:12.823 "supported_io_types": { 00:16:12.823 "read": true, 00:16:12.823 "write": true, 00:16:12.823 "unmap": true, 00:16:12.823 "flush": true, 00:16:12.823 "reset": true, 00:16:12.823 "nvme_admin": false, 00:16:12.823 "nvme_io": false, 00:16:12.823 "nvme_io_md": false, 00:16:12.823 "write_zeroes": true, 00:16:12.823 "zcopy": true, 00:16:12.823 "get_zone_info": false, 00:16:12.823 "zone_management": false, 00:16:12.823 "zone_append": false, 00:16:12.823 "compare": false, 00:16:12.823 "compare_and_write": false, 00:16:12.823 "abort": true, 00:16:12.823 "seek_hole": false, 00:16:12.823 "seek_data": false, 00:16:12.823 "copy": true, 00:16:12.823 "nvme_iov_md": false 00:16:12.823 }, 00:16:12.823 "memory_domains": [ 00:16:12.823 { 00:16:12.823 "dma_device_id": "system", 00:16:12.823 "dma_device_type": 1 00:16:12.823 }, 00:16:12.823 { 00:16:12.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.823 "dma_device_type": 2 00:16:12.823 } 00:16:12.823 ], 00:16:12.823 "driver_specific": {} 00:16:12.823 } 00:16:12.823 ] 00:16:12.823 11:54:39 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.823 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:12.823 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:12.823 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.823 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.823 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.823 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.823 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:12.823 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.823 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.823 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.823 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.823 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.823 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.823 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.823 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.823 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:12.823 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.823 "name": "Existed_Raid", 00:16:12.823 "uuid": "8a7af690-cda8-4b28-a9f4-841f2f825be9", 00:16:12.823 "strip_size_kb": 64, 00:16:12.823 "state": "online", 00:16:12.823 "raid_level": "raid5f", 00:16:12.823 "superblock": true, 00:16:12.823 "num_base_bdevs": 4, 00:16:12.823 "num_base_bdevs_discovered": 4, 00:16:12.823 "num_base_bdevs_operational": 4, 00:16:12.823 "base_bdevs_list": [ 00:16:12.823 { 00:16:12.823 "name": "NewBaseBdev", 00:16:12.823 "uuid": "714bb007-1912-47df-b738-b97879ea594a", 00:16:12.823 "is_configured": true, 00:16:12.823 "data_offset": 2048, 00:16:12.823 "data_size": 63488 00:16:12.823 }, 00:16:12.823 { 00:16:12.823 "name": "BaseBdev2", 00:16:12.823 "uuid": "c7e45593-bea7-4deb-ac39-fd1c4419a6e7", 00:16:12.823 "is_configured": true, 00:16:12.823 "data_offset": 2048, 00:16:12.823 "data_size": 63488 00:16:12.823 }, 00:16:12.823 { 00:16:12.823 "name": "BaseBdev3", 00:16:12.823 "uuid": "f0842ea9-8ef5-4242-97a1-0702b09966f4", 00:16:12.823 "is_configured": true, 00:16:12.823 "data_offset": 2048, 00:16:12.823 "data_size": 63488 00:16:12.823 }, 00:16:12.823 { 00:16:12.823 "name": "BaseBdev4", 00:16:12.823 "uuid": "33d76956-b949-4e54-b32a-f6543826a0b9", 00:16:12.823 "is_configured": true, 00:16:12.823 "data_offset": 2048, 00:16:12.823 "data_size": 63488 00:16:12.823 } 00:16:12.823 ] 00:16:12.823 }' 00:16:12.823 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.823 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.390 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:13.390 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:13.390 11:54:39 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:13.390 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:13.390 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:13.390 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:13.390 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:13.390 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.390 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.390 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:13.390 [2024-11-27 11:54:39.651566] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:13.390 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.390 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:13.390 "name": "Existed_Raid", 00:16:13.390 "aliases": [ 00:16:13.390 "8a7af690-cda8-4b28-a9f4-841f2f825be9" 00:16:13.390 ], 00:16:13.390 "product_name": "Raid Volume", 00:16:13.390 "block_size": 512, 00:16:13.390 "num_blocks": 190464, 00:16:13.390 "uuid": "8a7af690-cda8-4b28-a9f4-841f2f825be9", 00:16:13.390 "assigned_rate_limits": { 00:16:13.390 "rw_ios_per_sec": 0, 00:16:13.390 "rw_mbytes_per_sec": 0, 00:16:13.390 "r_mbytes_per_sec": 0, 00:16:13.390 "w_mbytes_per_sec": 0 00:16:13.390 }, 00:16:13.390 "claimed": false, 00:16:13.390 "zoned": false, 00:16:13.390 "supported_io_types": { 00:16:13.390 "read": true, 00:16:13.390 "write": true, 00:16:13.390 "unmap": false, 00:16:13.390 "flush": false, 00:16:13.390 "reset": true, 00:16:13.390 "nvme_admin": false, 00:16:13.390 "nvme_io": false, 
00:16:13.390 "nvme_io_md": false, 00:16:13.390 "write_zeroes": true, 00:16:13.390 "zcopy": false, 00:16:13.390 "get_zone_info": false, 00:16:13.390 "zone_management": false, 00:16:13.390 "zone_append": false, 00:16:13.390 "compare": false, 00:16:13.390 "compare_and_write": false, 00:16:13.390 "abort": false, 00:16:13.390 "seek_hole": false, 00:16:13.390 "seek_data": false, 00:16:13.390 "copy": false, 00:16:13.390 "nvme_iov_md": false 00:16:13.390 }, 00:16:13.390 "driver_specific": { 00:16:13.390 "raid": { 00:16:13.390 "uuid": "8a7af690-cda8-4b28-a9f4-841f2f825be9", 00:16:13.390 "strip_size_kb": 64, 00:16:13.390 "state": "online", 00:16:13.390 "raid_level": "raid5f", 00:16:13.390 "superblock": true, 00:16:13.390 "num_base_bdevs": 4, 00:16:13.390 "num_base_bdevs_discovered": 4, 00:16:13.390 "num_base_bdevs_operational": 4, 00:16:13.390 "base_bdevs_list": [ 00:16:13.390 { 00:16:13.390 "name": "NewBaseBdev", 00:16:13.390 "uuid": "714bb007-1912-47df-b738-b97879ea594a", 00:16:13.390 "is_configured": true, 00:16:13.390 "data_offset": 2048, 00:16:13.390 "data_size": 63488 00:16:13.390 }, 00:16:13.390 { 00:16:13.390 "name": "BaseBdev2", 00:16:13.390 "uuid": "c7e45593-bea7-4deb-ac39-fd1c4419a6e7", 00:16:13.390 "is_configured": true, 00:16:13.390 "data_offset": 2048, 00:16:13.390 "data_size": 63488 00:16:13.390 }, 00:16:13.390 { 00:16:13.390 "name": "BaseBdev3", 00:16:13.390 "uuid": "f0842ea9-8ef5-4242-97a1-0702b09966f4", 00:16:13.390 "is_configured": true, 00:16:13.390 "data_offset": 2048, 00:16:13.390 "data_size": 63488 00:16:13.390 }, 00:16:13.390 { 00:16:13.390 "name": "BaseBdev4", 00:16:13.390 "uuid": "33d76956-b949-4e54-b32a-f6543826a0b9", 00:16:13.390 "is_configured": true, 00:16:13.390 "data_offset": 2048, 00:16:13.390 "data_size": 63488 00:16:13.390 } 00:16:13.390 ] 00:16:13.390 } 00:16:13.390 } 00:16:13.390 }' 00:16:13.390 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:16:13.390 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:13.390 BaseBdev2 00:16:13.390 BaseBdev3 00:16:13.390 BaseBdev4' 00:16:13.390 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:13.390 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:13.390 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:13.390 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:13.390 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:13.390 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.390 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.649 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.650 11:54:39 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.650 [2024-11-27 11:54:39.982758] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:13.650 [2024-11-27 11:54:39.982855] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:13.650 [2024-11-27 11:54:39.982986] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:13.650 [2024-11-27 11:54:39.983372] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:13.650 [2024-11-27 11:54:39.983440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83516 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83516 ']' 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83516 00:16:13.650 11:54:39 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.650 11:54:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83516 00:16:13.650 11:54:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:13.650 11:54:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:13.650 11:54:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83516' 00:16:13.650 killing process with pid 83516 00:16:13.650 11:54:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83516 00:16:13.650 [2024-11-27 11:54:40.018214] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:13.650 11:54:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83516 00:16:14.217 [2024-11-27 11:54:40.495715] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:15.596 ************************************ 00:16:15.596 END TEST raid5f_state_function_test_sb 00:16:15.596 ************************************ 00:16:15.596 11:54:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:15.596 00:16:15.596 real 0m12.503s 00:16:15.596 user 0m19.834s 00:16:15.596 sys 0m2.182s 00:16:15.596 11:54:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.596 11:54:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.596 11:54:41 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:16:15.596 11:54:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:15.596 
11:54:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.596 11:54:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:15.596 ************************************ 00:16:15.596 START TEST raid5f_superblock_test 00:16:15.596 ************************************ 00:16:15.596 11:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:16:15.596 11:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:15.596 11:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:15.596 11:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:15.596 11:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:15.596 11:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:15.596 11:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:15.596 11:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:15.596 11:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:15.596 11:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:15.596 11:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:15.596 11:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:15.596 11:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:15.596 11:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:15.596 11:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:15.596 11:54:41 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:15.596 11:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:15.596 11:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84191 00:16:15.596 11:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:15.596 11:54:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84191 00:16:15.596 11:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84191 ']' 00:16:15.596 11:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.596 11:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:15.596 11:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.596 11:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:15.596 11:54:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.596 [2024-11-27 11:54:41.843527] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:16:15.596 [2024-11-27 11:54:41.843747] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84191 ] 00:16:15.855 [2024-11-27 11:54:42.017184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.855 [2024-11-27 11:54:42.132583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.115 [2024-11-27 11:54:42.333699] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.115 [2024-11-27 11:54:42.333845] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.375 malloc1 00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.375 [2024-11-27 11:54:42.726356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:16.375 [2024-11-27 11:54:42.726457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.375 [2024-11-27 11:54:42.726484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:16.375 [2024-11-27 11:54:42.726494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.375 [2024-11-27 11:54:42.728580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.375 [2024-11-27 11:54:42.728618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:16.375 pt1 00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.375 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.637 malloc2 00:16:16.637 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.637 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:16.637 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.637 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.638 [2024-11-27 11:54:42.782090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:16.638 [2024-11-27 11:54:42.782208] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.638 [2024-11-27 11:54:42.782273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:16.638 [2024-11-27 11:54:42.782315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.638 [2024-11-27 11:54:42.784959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.638 [2024-11-27 11:54:42.785036] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:16.638 pt2 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.638 malloc3 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.638 [2024-11-27 11:54:42.852107] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:16.638 [2024-11-27 11:54:42.852227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.638 [2024-11-27 11:54:42.852267] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:16.638 [2024-11-27 11:54:42.852294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.638 [2024-11-27 11:54:42.854429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.638 [2024-11-27 11:54:42.854509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:16.638 pt3 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.638 11:54:42 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.638 malloc4 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.638 [2024-11-27 11:54:42.912024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:16.638 [2024-11-27 11:54:42.912119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.638 [2024-11-27 11:54:42.912158] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:16.638 [2024-11-27 11:54:42.912185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.638 [2024-11-27 11:54:42.914210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.638 [2024-11-27 11:54:42.914287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:16.638 pt4 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:16.638 [2024-11-27 11:54:42.924032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:16.638 [2024-11-27 11:54:42.925823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:16.638 [2024-11-27 11:54:42.925975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:16.638 [2024-11-27 11:54:42.926060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:16.638 [2024-11-27 11:54:42.926289] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:16.638 [2024-11-27 11:54:42.926338] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:16.638 [2024-11-27 11:54:42.926600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:16.638 [2024-11-27 11:54:42.934014] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:16.638 [2024-11-27 11:54:42.934070] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:16.638 [2024-11-27 11:54:42.934287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.638 
11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.638 "name": "raid_bdev1", 00:16:16.638 "uuid": "2f373570-f334-4b10-82ee-098e727b2fcd", 00:16:16.638 "strip_size_kb": 64, 00:16:16.638 "state": "online", 00:16:16.638 "raid_level": "raid5f", 00:16:16.638 "superblock": true, 00:16:16.638 "num_base_bdevs": 4, 00:16:16.638 "num_base_bdevs_discovered": 4, 00:16:16.638 "num_base_bdevs_operational": 4, 00:16:16.638 "base_bdevs_list": [ 00:16:16.638 { 00:16:16.638 "name": "pt1", 00:16:16.638 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:16.638 "is_configured": true, 00:16:16.638 "data_offset": 2048, 00:16:16.638 "data_size": 63488 00:16:16.638 }, 00:16:16.638 { 00:16:16.638 "name": "pt2", 00:16:16.638 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:16.638 "is_configured": true, 00:16:16.638 "data_offset": 2048, 00:16:16.638 
"data_size": 63488 00:16:16.638 }, 00:16:16.638 { 00:16:16.638 "name": "pt3", 00:16:16.638 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:16.638 "is_configured": true, 00:16:16.638 "data_offset": 2048, 00:16:16.638 "data_size": 63488 00:16:16.638 }, 00:16:16.638 { 00:16:16.638 "name": "pt4", 00:16:16.638 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:16.638 "is_configured": true, 00:16:16.638 "data_offset": 2048, 00:16:16.638 "data_size": 63488 00:16:16.638 } 00:16:16.638 ] 00:16:16.638 }' 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.638 11:54:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.210 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:17.210 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:17.210 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:17.210 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:17.210 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:17.210 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:17.210 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:17.210 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:17.210 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.210 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.210 [2024-11-27 11:54:43.438015] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:17.210 11:54:43 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.210 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:17.210 "name": "raid_bdev1", 00:16:17.210 "aliases": [ 00:16:17.210 "2f373570-f334-4b10-82ee-098e727b2fcd" 00:16:17.210 ], 00:16:17.210 "product_name": "Raid Volume", 00:16:17.210 "block_size": 512, 00:16:17.210 "num_blocks": 190464, 00:16:17.210 "uuid": "2f373570-f334-4b10-82ee-098e727b2fcd", 00:16:17.210 "assigned_rate_limits": { 00:16:17.210 "rw_ios_per_sec": 0, 00:16:17.210 "rw_mbytes_per_sec": 0, 00:16:17.210 "r_mbytes_per_sec": 0, 00:16:17.210 "w_mbytes_per_sec": 0 00:16:17.210 }, 00:16:17.210 "claimed": false, 00:16:17.210 "zoned": false, 00:16:17.210 "supported_io_types": { 00:16:17.210 "read": true, 00:16:17.210 "write": true, 00:16:17.210 "unmap": false, 00:16:17.210 "flush": false, 00:16:17.210 "reset": true, 00:16:17.210 "nvme_admin": false, 00:16:17.210 "nvme_io": false, 00:16:17.210 "nvme_io_md": false, 00:16:17.210 "write_zeroes": true, 00:16:17.210 "zcopy": false, 00:16:17.210 "get_zone_info": false, 00:16:17.210 "zone_management": false, 00:16:17.210 "zone_append": false, 00:16:17.210 "compare": false, 00:16:17.210 "compare_and_write": false, 00:16:17.210 "abort": false, 00:16:17.210 "seek_hole": false, 00:16:17.210 "seek_data": false, 00:16:17.210 "copy": false, 00:16:17.210 "nvme_iov_md": false 00:16:17.210 }, 00:16:17.210 "driver_specific": { 00:16:17.210 "raid": { 00:16:17.210 "uuid": "2f373570-f334-4b10-82ee-098e727b2fcd", 00:16:17.210 "strip_size_kb": 64, 00:16:17.210 "state": "online", 00:16:17.210 "raid_level": "raid5f", 00:16:17.210 "superblock": true, 00:16:17.210 "num_base_bdevs": 4, 00:16:17.210 "num_base_bdevs_discovered": 4, 00:16:17.210 "num_base_bdevs_operational": 4, 00:16:17.210 "base_bdevs_list": [ 00:16:17.210 { 00:16:17.210 "name": "pt1", 00:16:17.210 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:17.210 "is_configured": true, 00:16:17.210 "data_offset": 2048, 
00:16:17.210 "data_size": 63488 00:16:17.210 }, 00:16:17.210 { 00:16:17.210 "name": "pt2", 00:16:17.210 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:17.210 "is_configured": true, 00:16:17.210 "data_offset": 2048, 00:16:17.210 "data_size": 63488 00:16:17.210 }, 00:16:17.210 { 00:16:17.210 "name": "pt3", 00:16:17.210 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:17.210 "is_configured": true, 00:16:17.210 "data_offset": 2048, 00:16:17.210 "data_size": 63488 00:16:17.210 }, 00:16:17.210 { 00:16:17.210 "name": "pt4", 00:16:17.210 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:17.210 "is_configured": true, 00:16:17.210 "data_offset": 2048, 00:16:17.210 "data_size": 63488 00:16:17.210 } 00:16:17.210 ] 00:16:17.210 } 00:16:17.210 } 00:16:17.210 }' 00:16:17.210 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:17.210 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:17.210 pt2 00:16:17.210 pt3 00:16:17.210 pt4' 00:16:17.210 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.210 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:17.210 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:17.210 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.210 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:17.210 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.210 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.210 11:54:43 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:17.469 [2024-11-27 11:54:43.741476] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2f373570-f334-4b10-82ee-098e727b2fcd 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
2f373570-f334-4b10-82ee-098e727b2fcd ']' 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.469 [2024-11-27 11:54:43.793203] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:17.469 [2024-11-27 11:54:43.793240] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:17.469 [2024-11-27 11:54:43.793342] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:17.469 [2024-11-27 11:54:43.793434] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:17.469 [2024-11-27 11:54:43.793462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:17.469 
11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:17.469 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.470 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.729 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.729 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:17.729 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.730 11:54:43 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.730 [2024-11-27 11:54:43.960937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:17.730 [2024-11-27 11:54:43.962877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:17.730 [2024-11-27 11:54:43.962926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:17.730 [2024-11-27 11:54:43.962960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:17.730 [2024-11-27 11:54:43.963009] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:17.730 [2024-11-27 11:54:43.963055] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:17.730 [2024-11-27 11:54:43.963074] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:17.730 [2024-11-27 11:54:43.963092] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:17.730 [2024-11-27 11:54:43.963106] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:17.730 [2024-11-27 11:54:43.963116] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:17.730 request: 00:16:17.730 { 00:16:17.730 "name": "raid_bdev1", 00:16:17.730 "raid_level": "raid5f", 00:16:17.730 "base_bdevs": [ 00:16:17.730 "malloc1", 00:16:17.730 "malloc2", 00:16:17.730 "malloc3", 00:16:17.730 "malloc4" 00:16:17.730 ], 00:16:17.730 "strip_size_kb": 64, 00:16:17.730 "superblock": false, 00:16:17.730 "method": "bdev_raid_create", 00:16:17.730 "req_id": 1 00:16:17.730 } 00:16:17.730 Got JSON-RPC error response 
00:16:17.730 response: 00:16:17.730 { 00:16:17.730 "code": -17, 00:16:17.730 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:17.730 } 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:17.730 11:54:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.730 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:17.730 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:17.730 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:17.730 11:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.730 11:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.730 [2024-11-27 11:54:44.028755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:17.730 [2024-11-27 11:54:44.028883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:17.730 [2024-11-27 11:54:44.028927] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:17.730 [2024-11-27 11:54:44.028980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.730 [2024-11-27 11:54:44.031394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.730 [2024-11-27 11:54:44.031471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:17.730 [2024-11-27 11:54:44.031586] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:17.730 [2024-11-27 11:54:44.031686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:17.730 pt1 00:16:17.730 11:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.730 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:17.730 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.730 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.730 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.730 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.730 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.730 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.730 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.730 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.730 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:17.730 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.730 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.730 11:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.730 11:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.730 11:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.730 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.730 "name": "raid_bdev1", 00:16:17.730 "uuid": "2f373570-f334-4b10-82ee-098e727b2fcd", 00:16:17.730 "strip_size_kb": 64, 00:16:17.730 "state": "configuring", 00:16:17.730 "raid_level": "raid5f", 00:16:17.730 "superblock": true, 00:16:17.730 "num_base_bdevs": 4, 00:16:17.730 "num_base_bdevs_discovered": 1, 00:16:17.730 "num_base_bdevs_operational": 4, 00:16:17.730 "base_bdevs_list": [ 00:16:17.730 { 00:16:17.730 "name": "pt1", 00:16:17.730 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:17.730 "is_configured": true, 00:16:17.730 "data_offset": 2048, 00:16:17.730 "data_size": 63488 00:16:17.730 }, 00:16:17.730 { 00:16:17.730 "name": null, 00:16:17.730 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:17.730 "is_configured": false, 00:16:17.730 "data_offset": 2048, 00:16:17.730 "data_size": 63488 00:16:17.730 }, 00:16:17.730 { 00:16:17.730 "name": null, 00:16:17.730 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:17.730 "is_configured": false, 00:16:17.730 "data_offset": 2048, 00:16:17.730 "data_size": 63488 00:16:17.730 }, 00:16:17.730 { 00:16:17.730 "name": null, 00:16:17.730 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:17.730 "is_configured": false, 00:16:17.730 "data_offset": 2048, 00:16:17.730 "data_size": 63488 00:16:17.730 } 00:16:17.730 ] 00:16:17.730 }' 
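The trace above shows `verify_raid_bdev_state` pulling one raid bdev's info out of `bdev_raid_get_bdevs all` with a `jq` select filter. A minimal, self-contained sketch of that filter is below; the inline JSON is a trimmed sample modeled on the dump in this log, not live RPC output, and in the real test the JSON comes from `rpc_cmd bdev_raid_get_bdevs all`.

```shell
# Sketch: extract one raid bdev's record and a single field from it,
# the way the test's jq filter does. Sample JSON is an assumption
# trimmed from the log above, not a live rpc.py response.
json='[{"name":"raid_bdev1","state":"configuring","raid_level":"raid5f","num_base_bdevs_discovered":1}]'

# Select the record whose .name matches, as in bdev_bdev_raid.sh@113.
info=$(printf '%s' "$json" | jq -r '.[] | select(.name == "raid_bdev1")')

# Individual fields can then be read from the selected record.
state=$(printf '%s' "$info" | jq -r '.state')
echo "$state"
```

In the log, the same `select(.name == ...)` filter is what populates `raid_bdev_info` before the expected state, raid level, and base-bdev counts are compared.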
00:16:17.730 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.730 11:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.299 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:18.299 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:18.299 11:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.299 11:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.299 [2024-11-27 11:54:44.500005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:18.299 [2024-11-27 11:54:44.500084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.299 [2024-11-27 11:54:44.500107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:18.299 [2024-11-27 11:54:44.500118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.299 [2024-11-27 11:54:44.500561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.299 [2024-11-27 11:54:44.500580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:18.299 [2024-11-27 11:54:44.500661] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:18.299 [2024-11-27 11:54:44.500687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:18.299 pt2 00:16:18.299 11:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.299 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:18.299 11:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:18.299 11:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.299 [2024-11-27 11:54:44.511978] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:18.299 11:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.299 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:18.299 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.299 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.299 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.299 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.299 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.299 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.299 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.299 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.299 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.299 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.299 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.299 11:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.299 11:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.299 11:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:18.299 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.299 "name": "raid_bdev1", 00:16:18.299 "uuid": "2f373570-f334-4b10-82ee-098e727b2fcd", 00:16:18.299 "strip_size_kb": 64, 00:16:18.299 "state": "configuring", 00:16:18.299 "raid_level": "raid5f", 00:16:18.299 "superblock": true, 00:16:18.299 "num_base_bdevs": 4, 00:16:18.299 "num_base_bdevs_discovered": 1, 00:16:18.299 "num_base_bdevs_operational": 4, 00:16:18.299 "base_bdevs_list": [ 00:16:18.299 { 00:16:18.299 "name": "pt1", 00:16:18.299 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:18.299 "is_configured": true, 00:16:18.299 "data_offset": 2048, 00:16:18.299 "data_size": 63488 00:16:18.299 }, 00:16:18.299 { 00:16:18.299 "name": null, 00:16:18.299 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:18.299 "is_configured": false, 00:16:18.299 "data_offset": 0, 00:16:18.299 "data_size": 63488 00:16:18.299 }, 00:16:18.299 { 00:16:18.299 "name": null, 00:16:18.299 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:18.299 "is_configured": false, 00:16:18.299 "data_offset": 2048, 00:16:18.299 "data_size": 63488 00:16:18.299 }, 00:16:18.299 { 00:16:18.299 "name": null, 00:16:18.299 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:18.299 "is_configured": false, 00:16:18.299 "data_offset": 2048, 00:16:18.299 "data_size": 63488 00:16:18.299 } 00:16:18.299 ] 00:16:18.299 }' 00:16:18.299 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.299 11:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.559 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:18.559 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:18.559 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:16:18.559 11:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.559 11:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.819 [2024-11-27 11:54:44.943219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:18.819 [2024-11-27 11:54:44.943336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.819 [2024-11-27 11:54:44.943386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:18.819 [2024-11-27 11:54:44.943420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.819 [2024-11-27 11:54:44.943906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.819 [2024-11-27 11:54:44.943973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:18.819 [2024-11-27 11:54:44.944085] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:18.819 [2024-11-27 11:54:44.944135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:18.819 pt2 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.819 [2024-11-27 11:54:44.955157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:16:18.819 [2024-11-27 11:54:44.955234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.819 [2024-11-27 11:54:44.955274] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:18.819 [2024-11-27 11:54:44.955303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.819 [2024-11-27 11:54:44.955676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.819 [2024-11-27 11:54:44.955729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:18.819 [2024-11-27 11:54:44.955812] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:18.819 [2024-11-27 11:54:44.955881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:18.819 pt3 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.819 [2024-11-27 11:54:44.967109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:18.819 [2024-11-27 11:54:44.967148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.819 [2024-11-27 11:54:44.967162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:18.819 [2024-11-27 11:54:44.967169] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.819 [2024-11-27 11:54:44.967499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.819 [2024-11-27 11:54:44.967514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:18.819 [2024-11-27 11:54:44.967566] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:18.819 [2024-11-27 11:54:44.967584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:18.819 [2024-11-27 11:54:44.967717] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:18.819 [2024-11-27 11:54:44.967725] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:18.819 [2024-11-27 11:54:44.967980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:18.819 [2024-11-27 11:54:44.974774] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:18.819 [2024-11-27 11:54:44.974797] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:18.819 [2024-11-27 11:54:44.974990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.819 pt4 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.819 11:54:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.819 11:54:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.819 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.819 "name": "raid_bdev1", 00:16:18.819 "uuid": "2f373570-f334-4b10-82ee-098e727b2fcd", 00:16:18.819 "strip_size_kb": 64, 00:16:18.819 "state": "online", 00:16:18.819 "raid_level": "raid5f", 00:16:18.819 "superblock": true, 00:16:18.819 "num_base_bdevs": 4, 00:16:18.819 "num_base_bdevs_discovered": 4, 00:16:18.819 "num_base_bdevs_operational": 4, 00:16:18.819 "base_bdevs_list": [ 00:16:18.819 { 00:16:18.819 "name": "pt1", 00:16:18.819 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:18.819 "is_configured": true, 00:16:18.819 
"data_offset": 2048, 00:16:18.819 "data_size": 63488 00:16:18.819 }, 00:16:18.819 { 00:16:18.819 "name": "pt2", 00:16:18.819 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:18.819 "is_configured": true, 00:16:18.819 "data_offset": 2048, 00:16:18.819 "data_size": 63488 00:16:18.819 }, 00:16:18.819 { 00:16:18.819 "name": "pt3", 00:16:18.819 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:18.819 "is_configured": true, 00:16:18.819 "data_offset": 2048, 00:16:18.819 "data_size": 63488 00:16:18.819 }, 00:16:18.819 { 00:16:18.819 "name": "pt4", 00:16:18.819 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:18.819 "is_configured": true, 00:16:18.819 "data_offset": 2048, 00:16:18.819 "data_size": 63488 00:16:18.819 } 00:16:18.819 ] 00:16:18.819 }' 00:16:18.819 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.819 11:54:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.078 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:19.078 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:19.078 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:19.078 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:19.078 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:19.078 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:19.078 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:19.078 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:19.078 11:54:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.078 11:54:45 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.078 [2024-11-27 11:54:45.391083] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.078 11:54:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.078 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:19.078 "name": "raid_bdev1", 00:16:19.078 "aliases": [ 00:16:19.078 "2f373570-f334-4b10-82ee-098e727b2fcd" 00:16:19.078 ], 00:16:19.078 "product_name": "Raid Volume", 00:16:19.078 "block_size": 512, 00:16:19.078 "num_blocks": 190464, 00:16:19.078 "uuid": "2f373570-f334-4b10-82ee-098e727b2fcd", 00:16:19.078 "assigned_rate_limits": { 00:16:19.078 "rw_ios_per_sec": 0, 00:16:19.078 "rw_mbytes_per_sec": 0, 00:16:19.078 "r_mbytes_per_sec": 0, 00:16:19.078 "w_mbytes_per_sec": 0 00:16:19.078 }, 00:16:19.079 "claimed": false, 00:16:19.079 "zoned": false, 00:16:19.079 "supported_io_types": { 00:16:19.079 "read": true, 00:16:19.079 "write": true, 00:16:19.079 "unmap": false, 00:16:19.079 "flush": false, 00:16:19.079 "reset": true, 00:16:19.079 "nvme_admin": false, 00:16:19.079 "nvme_io": false, 00:16:19.079 "nvme_io_md": false, 00:16:19.079 "write_zeroes": true, 00:16:19.079 "zcopy": false, 00:16:19.079 "get_zone_info": false, 00:16:19.079 "zone_management": false, 00:16:19.079 "zone_append": false, 00:16:19.079 "compare": false, 00:16:19.079 "compare_and_write": false, 00:16:19.079 "abort": false, 00:16:19.079 "seek_hole": false, 00:16:19.079 "seek_data": false, 00:16:19.079 "copy": false, 00:16:19.079 "nvme_iov_md": false 00:16:19.079 }, 00:16:19.079 "driver_specific": { 00:16:19.079 "raid": { 00:16:19.079 "uuid": "2f373570-f334-4b10-82ee-098e727b2fcd", 00:16:19.079 "strip_size_kb": 64, 00:16:19.079 "state": "online", 00:16:19.079 "raid_level": "raid5f", 00:16:19.079 "superblock": true, 00:16:19.079 "num_base_bdevs": 4, 00:16:19.079 "num_base_bdevs_discovered": 4, 
00:16:19.079 "num_base_bdevs_operational": 4, 00:16:19.079 "base_bdevs_list": [ 00:16:19.079 { 00:16:19.079 "name": "pt1", 00:16:19.079 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:19.079 "is_configured": true, 00:16:19.079 "data_offset": 2048, 00:16:19.079 "data_size": 63488 00:16:19.079 }, 00:16:19.079 { 00:16:19.079 "name": "pt2", 00:16:19.079 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:19.079 "is_configured": true, 00:16:19.079 "data_offset": 2048, 00:16:19.079 "data_size": 63488 00:16:19.079 }, 00:16:19.079 { 00:16:19.079 "name": "pt3", 00:16:19.079 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:19.079 "is_configured": true, 00:16:19.079 "data_offset": 2048, 00:16:19.079 "data_size": 63488 00:16:19.079 }, 00:16:19.079 { 00:16:19.079 "name": "pt4", 00:16:19.079 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:19.079 "is_configured": true, 00:16:19.079 "data_offset": 2048, 00:16:19.079 "data_size": 63488 00:16:19.079 } 00:16:19.079 ] 00:16:19.079 } 00:16:19.079 } 00:16:19.079 }' 00:16:19.079 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:19.338 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:19.338 pt2 00:16:19.338 pt3 00:16:19.338 pt4' 00:16:19.338 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.338 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:19.338 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.338 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.338 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:16:19.338 11:54:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.338 11:54:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.338 11:54:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.339 11:54:45 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.339 [2024-11-27 11:54:45.670481] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.339 
11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2f373570-f334-4b10-82ee-098e727b2fcd '!=' 2f373570-f334-4b10-82ee-098e727b2fcd ']' 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.339 11:54:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.339 [2024-11-27 11:54:45.714287] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:19.599 11:54:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.599 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:19.599 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.599 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.599 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.599 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.599 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:19.599 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.599 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.599 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:19.599 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.599 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.599 11:54:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.599 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.599 11:54:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.599 11:54:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.599 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.599 "name": "raid_bdev1", 00:16:19.599 "uuid": "2f373570-f334-4b10-82ee-098e727b2fcd", 00:16:19.599 "strip_size_kb": 64, 00:16:19.599 "state": "online", 00:16:19.599 "raid_level": "raid5f", 00:16:19.599 "superblock": true, 00:16:19.599 "num_base_bdevs": 4, 00:16:19.599 "num_base_bdevs_discovered": 3, 00:16:19.599 "num_base_bdevs_operational": 3, 00:16:19.599 "base_bdevs_list": [ 00:16:19.599 { 00:16:19.599 "name": null, 00:16:19.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.599 "is_configured": false, 00:16:19.599 "data_offset": 0, 00:16:19.599 "data_size": 63488 00:16:19.599 }, 00:16:19.599 { 00:16:19.599 "name": "pt2", 00:16:19.599 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:19.599 "is_configured": true, 00:16:19.599 "data_offset": 2048, 00:16:19.599 "data_size": 63488 00:16:19.599 }, 00:16:19.599 { 00:16:19.599 "name": "pt3", 00:16:19.599 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:19.599 "is_configured": true, 00:16:19.599 "data_offset": 2048, 00:16:19.599 "data_size": 63488 00:16:19.599 }, 00:16:19.599 { 00:16:19.599 "name": "pt4", 00:16:19.599 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:19.599 "is_configured": true, 00:16:19.599 
"data_offset": 2048, 00:16:19.599 "data_size": 63488 00:16:19.599 } 00:16:19.599 ] 00:16:19.599 }' 00:16:19.599 11:54:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.599 11:54:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.859 [2024-11-27 11:54:46.113575] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:19.859 [2024-11-27 11:54:46.113654] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:19.859 [2024-11-27 11:54:46.113764] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.859 [2024-11-27 11:54:46.113876] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.859 [2024-11-27 11:54:46.113923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.859 [2024-11-27 11:54:46.197419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:19.859 [2024-11-27 11:54:46.197470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.859 [2024-11-27 11:54:46.197488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:19.859 [2024-11-27 11:54:46.197497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.859 [2024-11-27 11:54:46.199681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.859 [2024-11-27 11:54:46.199764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:19.859 [2024-11-27 11:54:46.199933] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:19.859 [2024-11-27 11:54:46.199990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:19.859 pt2 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.859 11:54:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.119 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.119 "name": "raid_bdev1", 00:16:20.119 "uuid": "2f373570-f334-4b10-82ee-098e727b2fcd", 00:16:20.119 "strip_size_kb": 64, 00:16:20.119 "state": "configuring", 00:16:20.119 "raid_level": "raid5f", 00:16:20.119 "superblock": true, 00:16:20.119 
"num_base_bdevs": 4, 00:16:20.119 "num_base_bdevs_discovered": 1, 00:16:20.119 "num_base_bdevs_operational": 3, 00:16:20.119 "base_bdevs_list": [ 00:16:20.119 { 00:16:20.119 "name": null, 00:16:20.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.119 "is_configured": false, 00:16:20.119 "data_offset": 2048, 00:16:20.119 "data_size": 63488 00:16:20.119 }, 00:16:20.119 { 00:16:20.119 "name": "pt2", 00:16:20.119 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:20.119 "is_configured": true, 00:16:20.119 "data_offset": 2048, 00:16:20.119 "data_size": 63488 00:16:20.119 }, 00:16:20.119 { 00:16:20.119 "name": null, 00:16:20.119 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:20.119 "is_configured": false, 00:16:20.119 "data_offset": 2048, 00:16:20.119 "data_size": 63488 00:16:20.119 }, 00:16:20.119 { 00:16:20.119 "name": null, 00:16:20.119 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:20.119 "is_configured": false, 00:16:20.119 "data_offset": 2048, 00:16:20.119 "data_size": 63488 00:16:20.119 } 00:16:20.119 ] 00:16:20.119 }' 00:16:20.119 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.119 11:54:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.379 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:20.379 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:20.379 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:20.379 11:54:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.379 11:54:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.379 [2024-11-27 11:54:46.656704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:20.379 [2024-11-27 
11:54:46.656847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.379 [2024-11-27 11:54:46.656906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:20.379 [2024-11-27 11:54:46.656938] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.379 [2024-11-27 11:54:46.657424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.379 [2024-11-27 11:54:46.657484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:20.379 [2024-11-27 11:54:46.657599] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:20.379 [2024-11-27 11:54:46.657647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:20.379 pt3 00:16:20.379 11:54:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.379 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:20.379 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.379 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.379 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.379 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.379 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:20.379 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.379 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.380 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:20.380 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.380 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.380 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.380 11:54:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.380 11:54:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.380 11:54:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.380 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.380 "name": "raid_bdev1", 00:16:20.380 "uuid": "2f373570-f334-4b10-82ee-098e727b2fcd", 00:16:20.380 "strip_size_kb": 64, 00:16:20.380 "state": "configuring", 00:16:20.380 "raid_level": "raid5f", 00:16:20.380 "superblock": true, 00:16:20.380 "num_base_bdevs": 4, 00:16:20.380 "num_base_bdevs_discovered": 2, 00:16:20.380 "num_base_bdevs_operational": 3, 00:16:20.380 "base_bdevs_list": [ 00:16:20.380 { 00:16:20.380 "name": null, 00:16:20.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.380 "is_configured": false, 00:16:20.380 "data_offset": 2048, 00:16:20.380 "data_size": 63488 00:16:20.380 }, 00:16:20.380 { 00:16:20.380 "name": "pt2", 00:16:20.380 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:20.380 "is_configured": true, 00:16:20.380 "data_offset": 2048, 00:16:20.380 "data_size": 63488 00:16:20.380 }, 00:16:20.380 { 00:16:20.380 "name": "pt3", 00:16:20.380 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:20.380 "is_configured": true, 00:16:20.380 "data_offset": 2048, 00:16:20.380 "data_size": 63488 00:16:20.380 }, 00:16:20.380 { 00:16:20.380 "name": null, 00:16:20.380 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:20.380 "is_configured": false, 00:16:20.380 "data_offset": 2048, 
00:16:20.380 "data_size": 63488 00:16:20.380 } 00:16:20.380 ] 00:16:20.380 }' 00:16:20.380 11:54:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.380 11:54:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.948 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:20.948 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:20.948 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:20.948 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:20.948 11:54:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.948 11:54:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.948 [2024-11-27 11:54:47.040077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:20.948 [2024-11-27 11:54:47.040195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.948 [2024-11-27 11:54:47.040223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:20.948 [2024-11-27 11:54:47.040232] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.948 [2024-11-27 11:54:47.040720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.948 [2024-11-27 11:54:47.040738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:20.948 [2024-11-27 11:54:47.040819] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:20.948 [2024-11-27 11:54:47.040847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:20.948 [2024-11-27 11:54:47.041017] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:20.948 [2024-11-27 11:54:47.041033] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:20.948 [2024-11-27 11:54:47.041299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:20.949 [2024-11-27 11:54:47.048509] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:20.949 [2024-11-27 11:54:47.048535] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:20.949 [2024-11-27 11:54:47.048869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.949 pt4 00:16:20.949 11:54:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.949 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:20.949 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.949 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.949 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.949 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.949 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:20.949 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.949 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.949 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.949 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.949 
11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.949 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.949 11:54:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.949 11:54:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.949 11:54:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.949 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.949 "name": "raid_bdev1", 00:16:20.949 "uuid": "2f373570-f334-4b10-82ee-098e727b2fcd", 00:16:20.949 "strip_size_kb": 64, 00:16:20.949 "state": "online", 00:16:20.949 "raid_level": "raid5f", 00:16:20.949 "superblock": true, 00:16:20.949 "num_base_bdevs": 4, 00:16:20.949 "num_base_bdevs_discovered": 3, 00:16:20.949 "num_base_bdevs_operational": 3, 00:16:20.949 "base_bdevs_list": [ 00:16:20.949 { 00:16:20.949 "name": null, 00:16:20.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.949 "is_configured": false, 00:16:20.949 "data_offset": 2048, 00:16:20.949 "data_size": 63488 00:16:20.949 }, 00:16:20.949 { 00:16:20.949 "name": "pt2", 00:16:20.949 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:20.949 "is_configured": true, 00:16:20.949 "data_offset": 2048, 00:16:20.949 "data_size": 63488 00:16:20.949 }, 00:16:20.949 { 00:16:20.949 "name": "pt3", 00:16:20.949 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:20.949 "is_configured": true, 00:16:20.949 "data_offset": 2048, 00:16:20.949 "data_size": 63488 00:16:20.949 }, 00:16:20.949 { 00:16:20.949 "name": "pt4", 00:16:20.949 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:20.949 "is_configured": true, 00:16:20.949 "data_offset": 2048, 00:16:20.949 "data_size": 63488 00:16:20.949 } 00:16:20.949 ] 00:16:20.949 }' 00:16:20.949 11:54:47 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.949 11:54:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.208 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:21.208 11:54:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.208 11:54:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.208 [2024-11-27 11:54:47.512817] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:21.208 [2024-11-27 11:54:47.512931] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:21.208 [2024-11-27 11:54:47.513058] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.208 [2024-11-27 11:54:47.513187] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:21.208 [2024-11-27 11:54:47.513241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:21.208 11:54:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.208 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.208 11:54:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.208 11:54:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.208 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:21.208 11:54:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.208 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:21.208 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:21.208 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:21.208 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:21.208 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:21.208 11:54:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.208 11:54:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.208 11:54:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.208 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:21.208 11:54:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.208 11:54:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.208 [2024-11-27 11:54:47.584702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:21.208 [2024-11-27 11:54:47.584833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.208 [2024-11-27 11:54:47.584913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:21.208 [2024-11-27 11:54:47.584957] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.208 [2024-11-27 11:54:47.587473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.208 [2024-11-27 11:54:47.587558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:21.208 [2024-11-27 11:54:47.587694] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:21.208 [2024-11-27 11:54:47.587786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:21.208 
[2024-11-27 11:54:47.588014] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:21.208 [2024-11-27 11:54:47.588084] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:21.208 [2024-11-27 11:54:47.588151] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:21.208 [2024-11-27 11:54:47.588271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:21.208 [2024-11-27 11:54:47.588436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:21.208 pt1 00:16:21.473 11:54:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.473 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:21.473 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:21.473 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.473 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.473 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.473 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.473 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.473 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.473 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.473 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.473 11:54:47 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.473 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.473 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.473 11:54:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.473 11:54:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.473 11:54:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.473 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.473 "name": "raid_bdev1", 00:16:21.473 "uuid": "2f373570-f334-4b10-82ee-098e727b2fcd", 00:16:21.473 "strip_size_kb": 64, 00:16:21.473 "state": "configuring", 00:16:21.473 "raid_level": "raid5f", 00:16:21.473 "superblock": true, 00:16:21.473 "num_base_bdevs": 4, 00:16:21.473 "num_base_bdevs_discovered": 2, 00:16:21.473 "num_base_bdevs_operational": 3, 00:16:21.473 "base_bdevs_list": [ 00:16:21.473 { 00:16:21.473 "name": null, 00:16:21.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.474 "is_configured": false, 00:16:21.474 "data_offset": 2048, 00:16:21.474 "data_size": 63488 00:16:21.474 }, 00:16:21.474 { 00:16:21.474 "name": "pt2", 00:16:21.474 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:21.474 "is_configured": true, 00:16:21.474 "data_offset": 2048, 00:16:21.474 "data_size": 63488 00:16:21.474 }, 00:16:21.474 { 00:16:21.474 "name": "pt3", 00:16:21.474 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:21.474 "is_configured": true, 00:16:21.474 "data_offset": 2048, 00:16:21.474 "data_size": 63488 00:16:21.474 }, 00:16:21.474 { 00:16:21.474 "name": null, 00:16:21.474 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:21.474 "is_configured": false, 00:16:21.474 "data_offset": 2048, 00:16:21.474 "data_size": 63488 00:16:21.474 } 00:16:21.474 ] 
00:16:21.474 }' 00:16:21.474 11:54:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.474 11:54:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.734 11:54:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:21.734 11:54:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:21.734 11:54:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.734 11:54:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.734 11:54:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.734 11:54:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:21.734 11:54:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:21.734 11:54:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.734 11:54:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.734 [2024-11-27 11:54:48.047973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:21.734 [2024-11-27 11:54:48.048091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.734 [2024-11-27 11:54:48.048164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:21.734 [2024-11-27 11:54:48.048204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.734 [2024-11-27 11:54:48.048771] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.734 [2024-11-27 11:54:48.048853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:21.734 [2024-11-27 11:54:48.048961] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:21.734 [2024-11-27 11:54:48.048991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:21.734 [2024-11-27 11:54:48.049179] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:21.734 [2024-11-27 11:54:48.049189] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:21.734 [2024-11-27 11:54:48.049463] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:21.734 [2024-11-27 11:54:48.057626] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:21.734 [2024-11-27 11:54:48.057652] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:21.734 [2024-11-27 11:54:48.057973] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.734 pt4 00:16:21.734 11:54:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.734 11:54:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:21.734 11:54:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.734 11:54:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.734 11:54:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.734 11:54:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.734 11:54:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.734 11:54:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.734 11:54:48 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.734 11:54:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.734 11:54:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.734 11:54:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.734 11:54:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.734 11:54:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.734 11:54:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.734 11:54:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.734 11:54:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.734 "name": "raid_bdev1", 00:16:21.734 "uuid": "2f373570-f334-4b10-82ee-098e727b2fcd", 00:16:21.734 "strip_size_kb": 64, 00:16:21.734 "state": "online", 00:16:21.734 "raid_level": "raid5f", 00:16:21.734 "superblock": true, 00:16:21.734 "num_base_bdevs": 4, 00:16:21.734 "num_base_bdevs_discovered": 3, 00:16:21.734 "num_base_bdevs_operational": 3, 00:16:21.734 "base_bdevs_list": [ 00:16:21.734 { 00:16:21.734 "name": null, 00:16:21.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.734 "is_configured": false, 00:16:21.734 "data_offset": 2048, 00:16:21.734 "data_size": 63488 00:16:21.734 }, 00:16:21.734 { 00:16:21.734 "name": "pt2", 00:16:21.734 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:21.734 "is_configured": true, 00:16:21.734 "data_offset": 2048, 00:16:21.734 "data_size": 63488 00:16:21.734 }, 00:16:21.734 { 00:16:21.734 "name": "pt3", 00:16:21.734 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:21.734 "is_configured": true, 00:16:21.734 "data_offset": 2048, 00:16:21.734 "data_size": 63488 
00:16:21.734 }, 00:16:21.734 { 00:16:21.734 "name": "pt4", 00:16:21.734 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:21.734 "is_configured": true, 00:16:21.734 "data_offset": 2048, 00:16:21.734 "data_size": 63488 00:16:21.734 } 00:16:21.734 ] 00:16:21.734 }' 00:16:21.734 11:54:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.734 11:54:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.304 11:54:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:22.304 11:54:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:22.304 11:54:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.304 11:54:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.304 11:54:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.304 11:54:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:22.304 11:54:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:22.304 11:54:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:22.304 11:54:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.304 11:54:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.305 [2024-11-27 11:54:48.566609] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:22.305 11:54:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.305 11:54:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 2f373570-f334-4b10-82ee-098e727b2fcd '!=' 2f373570-f334-4b10-82ee-098e727b2fcd ']' 00:16:22.305 11:54:48 
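The `verify_raid_bdev_state` checks in the trace above pull a single bdev's record out of the RPC dump with a `jq` select before comparing fields. A minimal standalone sketch of that filter, using inline sample JSON in place of the live `rpc_cmd bdev_raid_get_bdevs all` output (the field values here are illustrative, not taken from this run):

```shell
# Sample JSON standing in for the `rpc_cmd bdev_raid_get_bdevs all` response;
# the harness pipes the real RPC output through the same kind of filter.
json='[{"name":"raid_bdev1","state":"online","raid_level":"raid5f"},
       {"name":"other_bdev","state":"configuring","raid_level":"raid1"}]'

# Select only the record for raid_bdev1, as bdev_raid.sh@113 does.
info=$(echo "$json" | jq -r '.[] | select(.name == "raid_bdev1")')

# Individual fields can then be extracted and compared against expectations.
state=$(echo "$info" | jq -r '.state')
echo "$state"   # online
```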
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84191 00:16:22.305 11:54:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84191 ']' 00:16:22.305 11:54:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84191 00:16:22.305 11:54:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:22.305 11:54:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:22.305 11:54:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84191 00:16:22.305 killing process with pid 84191 00:16:22.305 11:54:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:22.305 11:54:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:22.305 11:54:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84191' 00:16:22.305 11:54:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84191 00:16:22.305 [2024-11-27 11:54:48.621680] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:22.305 [2024-11-27 11:54:48.621780] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:22.305 11:54:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84191 00:16:22.305 [2024-11-27 11:54:48.621877] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:22.305 [2024-11-27 11:54:48.621896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:22.874 [2024-11-27 11:54:49.021937] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:23.813 11:54:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:23.813 
00:16:23.813 real 0m8.394s 00:16:23.813 user 0m13.141s 00:16:23.813 sys 0m1.536s 00:16:23.813 ************************************ 00:16:23.813 END TEST raid5f_superblock_test 00:16:23.813 ************************************ 00:16:23.813 11:54:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:23.813 11:54:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.072 11:54:50 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:24.072 11:54:50 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:24.072 11:54:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:24.072 11:54:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:24.072 11:54:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:24.072 ************************************ 00:16:24.072 START TEST raid5f_rebuild_test 00:16:24.072 ************************************ 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:24.072 11:54:50 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84672 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84672 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84672 ']' 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.072 11:54:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.072 [2024-11-27 11:54:50.317640] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:16:24.072 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:24.072 Zero copy mechanism will not be used. 
00:16:24.072 [2024-11-27 11:54:50.317859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84672 ] 00:16:24.331 [2024-11-27 11:54:50.492144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.331 [2024-11-27 11:54:50.602941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.590 [2024-11-27 11:54:50.803156] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.590 [2024-11-27 11:54:50.803187] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.850 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.850 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:24.850 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:24.850 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:24.850 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.851 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.851 BaseBdev1_malloc 00:16:24.851 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.851 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:24.851 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.851 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.851 [2024-11-27 11:54:51.182104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:16:24.851 [2024-11-27 11:54:51.182205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.851 [2024-11-27 11:54:51.182232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:24.851 [2024-11-27 11:54:51.182244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.851 [2024-11-27 11:54:51.184334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.851 [2024-11-27 11:54:51.184374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:24.851 BaseBdev1 00:16:24.851 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.851 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:24.851 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:24.851 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.851 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.851 BaseBdev2_malloc 00:16:24.851 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.851 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:24.851 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.851 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.111 [2024-11-27 11:54:51.236643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:25.111 [2024-11-27 11:54:51.236761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.111 [2024-11-27 11:54:51.236790] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:25.111 [2024-11-27 11:54:51.236803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.111 [2024-11-27 11:54:51.238932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.111 [2024-11-27 11:54:51.238967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:25.111 BaseBdev2 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.111 BaseBdev3_malloc 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.111 [2024-11-27 11:54:51.301617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:25.111 [2024-11-27 11:54:51.301667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.111 [2024-11-27 11:54:51.301705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:25.111 [2024-11-27 11:54:51.301715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.111 
[2024-11-27 11:54:51.303703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.111 [2024-11-27 11:54:51.303742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:25.111 BaseBdev3 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.111 BaseBdev4_malloc 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.111 [2024-11-27 11:54:51.354673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:25.111 [2024-11-27 11:54:51.354730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.111 [2024-11-27 11:54:51.354749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:25.111 [2024-11-27 11:54:51.354760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.111 [2024-11-27 11:54:51.356823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.111 [2024-11-27 11:54:51.356866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:16:25.111 BaseBdev4 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.111 spare_malloc 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.111 spare_delay 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:25.111 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.112 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.112 [2024-11-27 11:54:51.419543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:25.112 [2024-11-27 11:54:51.419592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.112 [2024-11-27 11:54:51.419608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:25.112 [2024-11-27 11:54:51.419619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.112 [2024-11-27 11:54:51.421698] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.112 [2024-11-27 11:54:51.421737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:25.112 spare 00:16:25.112 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.112 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:25.112 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.112 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.112 [2024-11-27 11:54:51.431572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:25.112 [2024-11-27 11:54:51.433409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:25.112 [2024-11-27 11:54:51.433468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:25.112 [2024-11-27 11:54:51.433516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:25.112 [2024-11-27 11:54:51.433596] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:25.112 [2024-11-27 11:54:51.433607] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:25.112 [2024-11-27 11:54:51.433837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:25.112 [2024-11-27 11:54:51.440561] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:25.112 [2024-11-27 11:54:51.440581] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:25.112 [2024-11-27 11:54:51.440754] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.112 11:54:51 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.112 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:25.112 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.112 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.112 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.112 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.112 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:25.112 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.112 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.112 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.112 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.112 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.112 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.112 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.112 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.112 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.112 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.112 "name": "raid_bdev1", 00:16:25.112 "uuid": "3de35909-e30f-4a87-923b-83e30377390d", 00:16:25.112 "strip_size_kb": 64, 00:16:25.112 "state": "online", 00:16:25.112 
"raid_level": "raid5f", 00:16:25.112 "superblock": false, 00:16:25.112 "num_base_bdevs": 4, 00:16:25.112 "num_base_bdevs_discovered": 4, 00:16:25.112 "num_base_bdevs_operational": 4, 00:16:25.112 "base_bdevs_list": [ 00:16:25.112 { 00:16:25.112 "name": "BaseBdev1", 00:16:25.112 "uuid": "e3cb8ab6-30ac-5af4-91fd-174ebf369ba2", 00:16:25.112 "is_configured": true, 00:16:25.112 "data_offset": 0, 00:16:25.112 "data_size": 65536 00:16:25.112 }, 00:16:25.112 { 00:16:25.112 "name": "BaseBdev2", 00:16:25.112 "uuid": "60773703-6575-5103-848f-75c106e66c7f", 00:16:25.112 "is_configured": true, 00:16:25.112 "data_offset": 0, 00:16:25.112 "data_size": 65536 00:16:25.112 }, 00:16:25.112 { 00:16:25.112 "name": "BaseBdev3", 00:16:25.112 "uuid": "9f19738e-cc63-55d6-b36d-e5e8f347accc", 00:16:25.112 "is_configured": true, 00:16:25.112 "data_offset": 0, 00:16:25.112 "data_size": 65536 00:16:25.112 }, 00:16:25.112 { 00:16:25.112 "name": "BaseBdev4", 00:16:25.112 "uuid": "47ce7d73-2268-51c7-94fa-a69cc345e200", 00:16:25.112 "is_configured": true, 00:16:25.112 "data_offset": 0, 00:16:25.112 "data_size": 65536 00:16:25.112 } 00:16:25.112 ] 00:16:25.112 }' 00:16:25.372 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.372 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.631 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:25.631 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.631 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.631 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:25.631 [2024-11-27 11:54:51.896067] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.631 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:25.631 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:16:25.631 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:25.631 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.631 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.631 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.631 11:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.631 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:25.631 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:25.631 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:25.631 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:25.631 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:25.631 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:25.631 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:25.631 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:25.631 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:25.631 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:25.631 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:25.631 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:25.631 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:16:25.631 11:54:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:25.890 [2024-11-27 11:54:52.143464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:25.890 /dev/nbd0 00:16:25.890 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:25.891 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:25.891 11:54:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:25.891 11:54:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:25.891 11:54:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:25.891 11:54:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:25.891 11:54:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:25.891 11:54:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:25.891 11:54:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:25.891 11:54:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:25.891 11:54:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:25.891 1+0 records in 00:16:25.891 1+0 records out 00:16:25.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433188 s, 9.5 MB/s 00:16:25.891 11:54:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:25.891 11:54:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:25.891 11:54:52 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:25.891 11:54:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:25.891 11:54:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:25.891 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:25.891 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:25.891 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:25.891 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:25.891 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:25.891 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:26.509 512+0 records in 00:16:26.509 512+0 records out 00:16:26.509 100663296 bytes (101 MB, 96 MiB) copied, 0.470086 s, 214 MB/s 00:16:26.509 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:26.509 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:26.509 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:26.509 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:26.509 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:26.509 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:26.509 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:26.784 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:26.784 
[2024-11-27 11:54:52.902070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.784 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:26.784 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:26.784 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:26.784 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:26.784 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:26.784 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:26.784 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:26.784 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:26.784 11:54:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.784 11:54:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.785 [2024-11-27 11:54:52.916433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:26.785 11:54:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.785 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:26.785 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.785 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.785 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.785 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.785 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:16:26.785 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.785 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.785 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.785 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.785 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.785 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.785 11:54:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.785 11:54:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.785 11:54:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.785 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.785 "name": "raid_bdev1", 00:16:26.785 "uuid": "3de35909-e30f-4a87-923b-83e30377390d", 00:16:26.785 "strip_size_kb": 64, 00:16:26.785 "state": "online", 00:16:26.785 "raid_level": "raid5f", 00:16:26.785 "superblock": false, 00:16:26.785 "num_base_bdevs": 4, 00:16:26.785 "num_base_bdevs_discovered": 3, 00:16:26.785 "num_base_bdevs_operational": 3, 00:16:26.785 "base_bdevs_list": [ 00:16:26.785 { 00:16:26.785 "name": null, 00:16:26.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.785 "is_configured": false, 00:16:26.785 "data_offset": 0, 00:16:26.785 "data_size": 65536 00:16:26.785 }, 00:16:26.785 { 00:16:26.785 "name": "BaseBdev2", 00:16:26.785 "uuid": "60773703-6575-5103-848f-75c106e66c7f", 00:16:26.785 "is_configured": true, 00:16:26.785 "data_offset": 0, 00:16:26.785 "data_size": 65536 00:16:26.785 }, 00:16:26.785 { 00:16:26.785 "name": "BaseBdev3", 00:16:26.785 "uuid": 
"9f19738e-cc63-55d6-b36d-e5e8f347accc", 00:16:26.785 "is_configured": true, 00:16:26.785 "data_offset": 0, 00:16:26.785 "data_size": 65536 00:16:26.785 }, 00:16:26.785 { 00:16:26.785 "name": "BaseBdev4", 00:16:26.785 "uuid": "47ce7d73-2268-51c7-94fa-a69cc345e200", 00:16:26.785 "is_configured": true, 00:16:26.785 "data_offset": 0, 00:16:26.785 "data_size": 65536 00:16:26.785 } 00:16:26.785 ] 00:16:26.785 }' 00:16:26.785 11:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.785 11:54:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.045 11:54:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:27.045 11:54:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.045 11:54:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.045 [2024-11-27 11:54:53.375657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:27.045 [2024-11-27 11:54:53.391302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:27.045 11:54:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.045 11:54:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:27.045 [2024-11-27 11:54:53.400711] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:28.426 11:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.426 11:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.426 11:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.426 11:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.426 11:54:54 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.426 11:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.426 11:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.426 11:54:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.426 11:54:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.426 11:54:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.426 11:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.426 "name": "raid_bdev1", 00:16:28.426 "uuid": "3de35909-e30f-4a87-923b-83e30377390d", 00:16:28.426 "strip_size_kb": 64, 00:16:28.426 "state": "online", 00:16:28.426 "raid_level": "raid5f", 00:16:28.426 "superblock": false, 00:16:28.426 "num_base_bdevs": 4, 00:16:28.426 "num_base_bdevs_discovered": 4, 00:16:28.426 "num_base_bdevs_operational": 4, 00:16:28.426 "process": { 00:16:28.426 "type": "rebuild", 00:16:28.426 "target": "spare", 00:16:28.426 "progress": { 00:16:28.426 "blocks": 19200, 00:16:28.426 "percent": 9 00:16:28.426 } 00:16:28.426 }, 00:16:28.426 "base_bdevs_list": [ 00:16:28.426 { 00:16:28.426 "name": "spare", 00:16:28.426 "uuid": "b4d97d41-c0e3-5c24-ad5a-ed95397f4083", 00:16:28.426 "is_configured": true, 00:16:28.426 "data_offset": 0, 00:16:28.426 "data_size": 65536 00:16:28.426 }, 00:16:28.426 { 00:16:28.426 "name": "BaseBdev2", 00:16:28.426 "uuid": "60773703-6575-5103-848f-75c106e66c7f", 00:16:28.426 "is_configured": true, 00:16:28.426 "data_offset": 0, 00:16:28.426 "data_size": 65536 00:16:28.426 }, 00:16:28.426 { 00:16:28.426 "name": "BaseBdev3", 00:16:28.426 "uuid": "9f19738e-cc63-55d6-b36d-e5e8f347accc", 00:16:28.426 "is_configured": true, 00:16:28.426 "data_offset": 0, 00:16:28.426 "data_size": 65536 00:16:28.426 }, 
00:16:28.426 { 00:16:28.426 "name": "BaseBdev4", 00:16:28.426 "uuid": "47ce7d73-2268-51c7-94fa-a69cc345e200", 00:16:28.426 "is_configured": true, 00:16:28.426 "data_offset": 0, 00:16:28.426 "data_size": 65536 00:16:28.426 } 00:16:28.426 ] 00:16:28.426 }' 00:16:28.427 11:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.427 11:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.427 11:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.427 11:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.427 11:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:28.427 11:54:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.427 11:54:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.427 [2024-11-27 11:54:54.551442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:28.427 [2024-11-27 11:54:54.607460] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:28.427 [2024-11-27 11:54:54.607534] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.427 [2024-11-27 11:54:54.607551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:28.427 [2024-11-27 11:54:54.607563] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:28.427 11:54:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.427 11:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:28.427 11:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:28.427 11:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.427 11:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.427 11:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.427 11:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:28.427 11:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.427 11:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.427 11:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.427 11:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.427 11:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.427 11:54:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.427 11:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.427 11:54:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.427 11:54:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.427 11:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.427 "name": "raid_bdev1", 00:16:28.427 "uuid": "3de35909-e30f-4a87-923b-83e30377390d", 00:16:28.427 "strip_size_kb": 64, 00:16:28.427 "state": "online", 00:16:28.427 "raid_level": "raid5f", 00:16:28.427 "superblock": false, 00:16:28.427 "num_base_bdevs": 4, 00:16:28.427 "num_base_bdevs_discovered": 3, 00:16:28.427 "num_base_bdevs_operational": 3, 00:16:28.427 "base_bdevs_list": [ 00:16:28.427 { 00:16:28.427 "name": null, 00:16:28.427 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:28.427 "is_configured": false, 00:16:28.427 "data_offset": 0, 00:16:28.427 "data_size": 65536 00:16:28.427 }, 00:16:28.427 { 00:16:28.427 "name": "BaseBdev2", 00:16:28.427 "uuid": "60773703-6575-5103-848f-75c106e66c7f", 00:16:28.427 "is_configured": true, 00:16:28.427 "data_offset": 0, 00:16:28.427 "data_size": 65536 00:16:28.427 }, 00:16:28.427 { 00:16:28.427 "name": "BaseBdev3", 00:16:28.427 "uuid": "9f19738e-cc63-55d6-b36d-e5e8f347accc", 00:16:28.427 "is_configured": true, 00:16:28.427 "data_offset": 0, 00:16:28.427 "data_size": 65536 00:16:28.427 }, 00:16:28.427 { 00:16:28.427 "name": "BaseBdev4", 00:16:28.427 "uuid": "47ce7d73-2268-51c7-94fa-a69cc345e200", 00:16:28.427 "is_configured": true, 00:16:28.427 "data_offset": 0, 00:16:28.427 "data_size": 65536 00:16:28.427 } 00:16:28.427 ] 00:16:28.427 }' 00:16:28.427 11:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.427 11:54:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.996 11:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:28.996 11:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.996 11:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:28.996 11:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:28.996 11:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.996 11:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.996 11:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.996 11:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.996 11:54:55 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.996 11:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.996 11:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.996 "name": "raid_bdev1", 00:16:28.996 "uuid": "3de35909-e30f-4a87-923b-83e30377390d", 00:16:28.996 "strip_size_kb": 64, 00:16:28.996 "state": "online", 00:16:28.996 "raid_level": "raid5f", 00:16:28.996 "superblock": false, 00:16:28.996 "num_base_bdevs": 4, 00:16:28.996 "num_base_bdevs_discovered": 3, 00:16:28.996 "num_base_bdevs_operational": 3, 00:16:28.996 "base_bdevs_list": [ 00:16:28.996 { 00:16:28.996 "name": null, 00:16:28.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.996 "is_configured": false, 00:16:28.996 "data_offset": 0, 00:16:28.996 "data_size": 65536 00:16:28.996 }, 00:16:28.996 { 00:16:28.996 "name": "BaseBdev2", 00:16:28.996 "uuid": "60773703-6575-5103-848f-75c106e66c7f", 00:16:28.996 "is_configured": true, 00:16:28.996 "data_offset": 0, 00:16:28.996 "data_size": 65536 00:16:28.996 }, 00:16:28.996 { 00:16:28.996 "name": "BaseBdev3", 00:16:28.996 "uuid": "9f19738e-cc63-55d6-b36d-e5e8f347accc", 00:16:28.996 "is_configured": true, 00:16:28.996 "data_offset": 0, 00:16:28.996 "data_size": 65536 00:16:28.996 }, 00:16:28.996 { 00:16:28.996 "name": "BaseBdev4", 00:16:28.996 "uuid": "47ce7d73-2268-51c7-94fa-a69cc345e200", 00:16:28.996 "is_configured": true, 00:16:28.996 "data_offset": 0, 00:16:28.996 "data_size": 65536 00:16:28.996 } 00:16:28.996 ] 00:16:28.996 }' 00:16:28.996 11:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.996 11:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:28.996 11:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.996 11:54:55 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:28.996 11:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:28.996 11:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.996 11:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.996 [2024-11-27 11:54:55.253052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:28.996 [2024-11-27 11:54:55.268662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:28.997 11:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.997 11:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:28.997 [2024-11-27 11:54:55.278222] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:29.935 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.935 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.935 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.935 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.935 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.935 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.935 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.935 11:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.935 11:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.935 11:54:56 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.195 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.195 "name": "raid_bdev1", 00:16:30.195 "uuid": "3de35909-e30f-4a87-923b-83e30377390d", 00:16:30.195 "strip_size_kb": 64, 00:16:30.195 "state": "online", 00:16:30.195 "raid_level": "raid5f", 00:16:30.195 "superblock": false, 00:16:30.195 "num_base_bdevs": 4, 00:16:30.195 "num_base_bdevs_discovered": 4, 00:16:30.195 "num_base_bdevs_operational": 4, 00:16:30.195 "process": { 00:16:30.195 "type": "rebuild", 00:16:30.195 "target": "spare", 00:16:30.195 "progress": { 00:16:30.195 "blocks": 19200, 00:16:30.195 "percent": 9 00:16:30.195 } 00:16:30.195 }, 00:16:30.195 "base_bdevs_list": [ 00:16:30.195 { 00:16:30.195 "name": "spare", 00:16:30.195 "uuid": "b4d97d41-c0e3-5c24-ad5a-ed95397f4083", 00:16:30.195 "is_configured": true, 00:16:30.195 "data_offset": 0, 00:16:30.195 "data_size": 65536 00:16:30.195 }, 00:16:30.195 { 00:16:30.195 "name": "BaseBdev2", 00:16:30.195 "uuid": "60773703-6575-5103-848f-75c106e66c7f", 00:16:30.195 "is_configured": true, 00:16:30.195 "data_offset": 0, 00:16:30.195 "data_size": 65536 00:16:30.195 }, 00:16:30.195 { 00:16:30.195 "name": "BaseBdev3", 00:16:30.195 "uuid": "9f19738e-cc63-55d6-b36d-e5e8f347accc", 00:16:30.195 "is_configured": true, 00:16:30.195 "data_offset": 0, 00:16:30.195 "data_size": 65536 00:16:30.195 }, 00:16:30.195 { 00:16:30.195 "name": "BaseBdev4", 00:16:30.195 "uuid": "47ce7d73-2268-51c7-94fa-a69cc345e200", 00:16:30.195 "is_configured": true, 00:16:30.195 "data_offset": 0, 00:16:30.195 "data_size": 65536 00:16:30.195 } 00:16:30.195 ] 00:16:30.195 }' 00:16:30.195 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.195 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:30.195 11:54:56 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.195 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:30.195 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:30.195 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:30.195 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:30.195 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=631 00:16:30.195 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:30.195 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.195 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.195 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.195 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.195 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.195 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.195 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.195 11:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.195 11:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.195 11:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.195 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.195 "name": "raid_bdev1", 00:16:30.195 "uuid": "3de35909-e30f-4a87-923b-83e30377390d", 
00:16:30.195 "strip_size_kb": 64, 00:16:30.195 "state": "online", 00:16:30.195 "raid_level": "raid5f", 00:16:30.195 "superblock": false, 00:16:30.195 "num_base_bdevs": 4, 00:16:30.195 "num_base_bdevs_discovered": 4, 00:16:30.195 "num_base_bdevs_operational": 4, 00:16:30.195 "process": { 00:16:30.195 "type": "rebuild", 00:16:30.195 "target": "spare", 00:16:30.195 "progress": { 00:16:30.195 "blocks": 21120, 00:16:30.195 "percent": 10 00:16:30.195 } 00:16:30.195 }, 00:16:30.195 "base_bdevs_list": [ 00:16:30.195 { 00:16:30.195 "name": "spare", 00:16:30.195 "uuid": "b4d97d41-c0e3-5c24-ad5a-ed95397f4083", 00:16:30.195 "is_configured": true, 00:16:30.195 "data_offset": 0, 00:16:30.195 "data_size": 65536 00:16:30.195 }, 00:16:30.195 { 00:16:30.195 "name": "BaseBdev2", 00:16:30.195 "uuid": "60773703-6575-5103-848f-75c106e66c7f", 00:16:30.195 "is_configured": true, 00:16:30.195 "data_offset": 0, 00:16:30.195 "data_size": 65536 00:16:30.195 }, 00:16:30.195 { 00:16:30.195 "name": "BaseBdev3", 00:16:30.195 "uuid": "9f19738e-cc63-55d6-b36d-e5e8f347accc", 00:16:30.195 "is_configured": true, 00:16:30.195 "data_offset": 0, 00:16:30.195 "data_size": 65536 00:16:30.195 }, 00:16:30.195 { 00:16:30.196 "name": "BaseBdev4", 00:16:30.196 "uuid": "47ce7d73-2268-51c7-94fa-a69cc345e200", 00:16:30.196 "is_configured": true, 00:16:30.196 "data_offset": 0, 00:16:30.196 "data_size": 65536 00:16:30.196 } 00:16:30.196 ] 00:16:30.196 }' 00:16:30.196 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.196 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:30.196 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.196 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:30.196 11:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:31.575 11:54:57 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:31.575 11:54:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.575 11:54:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.575 11:54:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.575 11:54:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.575 11:54:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.575 11:54:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.575 11:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.575 11:54:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.575 11:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.575 11:54:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.575 11:54:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.575 "name": "raid_bdev1", 00:16:31.575 "uuid": "3de35909-e30f-4a87-923b-83e30377390d", 00:16:31.575 "strip_size_kb": 64, 00:16:31.575 "state": "online", 00:16:31.575 "raid_level": "raid5f", 00:16:31.575 "superblock": false, 00:16:31.575 "num_base_bdevs": 4, 00:16:31.575 "num_base_bdevs_discovered": 4, 00:16:31.575 "num_base_bdevs_operational": 4, 00:16:31.575 "process": { 00:16:31.575 "type": "rebuild", 00:16:31.575 "target": "spare", 00:16:31.575 "progress": { 00:16:31.575 "blocks": 44160, 00:16:31.575 "percent": 22 00:16:31.575 } 00:16:31.575 }, 00:16:31.575 "base_bdevs_list": [ 00:16:31.575 { 00:16:31.576 "name": "spare", 00:16:31.576 "uuid": "b4d97d41-c0e3-5c24-ad5a-ed95397f4083", 
00:16:31.576 "is_configured": true, 00:16:31.576 "data_offset": 0, 00:16:31.576 "data_size": 65536 00:16:31.576 }, 00:16:31.576 { 00:16:31.576 "name": "BaseBdev2", 00:16:31.576 "uuid": "60773703-6575-5103-848f-75c106e66c7f", 00:16:31.576 "is_configured": true, 00:16:31.576 "data_offset": 0, 00:16:31.576 "data_size": 65536 00:16:31.576 }, 00:16:31.576 { 00:16:31.576 "name": "BaseBdev3", 00:16:31.576 "uuid": "9f19738e-cc63-55d6-b36d-e5e8f347accc", 00:16:31.576 "is_configured": true, 00:16:31.576 "data_offset": 0, 00:16:31.576 "data_size": 65536 00:16:31.576 }, 00:16:31.576 { 00:16:31.576 "name": "BaseBdev4", 00:16:31.576 "uuid": "47ce7d73-2268-51c7-94fa-a69cc345e200", 00:16:31.576 "is_configured": true, 00:16:31.576 "data_offset": 0, 00:16:31.576 "data_size": 65536 00:16:31.576 } 00:16:31.576 ] 00:16:31.576 }' 00:16:31.576 11:54:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.576 11:54:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.576 11:54:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.576 11:54:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.576 11:54:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:32.557 11:54:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:32.557 11:54:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.557 11:54:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.557 11:54:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.557 11:54:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.557 11:54:58 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.557 11:54:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.557 11:54:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.557 11:54:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.557 11:54:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.557 11:54:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.557 11:54:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.557 "name": "raid_bdev1", 00:16:32.557 "uuid": "3de35909-e30f-4a87-923b-83e30377390d", 00:16:32.557 "strip_size_kb": 64, 00:16:32.557 "state": "online", 00:16:32.557 "raid_level": "raid5f", 00:16:32.557 "superblock": false, 00:16:32.557 "num_base_bdevs": 4, 00:16:32.557 "num_base_bdevs_discovered": 4, 00:16:32.557 "num_base_bdevs_operational": 4, 00:16:32.557 "process": { 00:16:32.557 "type": "rebuild", 00:16:32.557 "target": "spare", 00:16:32.557 "progress": { 00:16:32.557 "blocks": 65280, 00:16:32.557 "percent": 33 00:16:32.557 } 00:16:32.557 }, 00:16:32.557 "base_bdevs_list": [ 00:16:32.557 { 00:16:32.557 "name": "spare", 00:16:32.557 "uuid": "b4d97d41-c0e3-5c24-ad5a-ed95397f4083", 00:16:32.557 "is_configured": true, 00:16:32.557 "data_offset": 0, 00:16:32.557 "data_size": 65536 00:16:32.557 }, 00:16:32.557 { 00:16:32.557 "name": "BaseBdev2", 00:16:32.557 "uuid": "60773703-6575-5103-848f-75c106e66c7f", 00:16:32.557 "is_configured": true, 00:16:32.557 "data_offset": 0, 00:16:32.557 "data_size": 65536 00:16:32.557 }, 00:16:32.557 { 00:16:32.557 "name": "BaseBdev3", 00:16:32.557 "uuid": "9f19738e-cc63-55d6-b36d-e5e8f347accc", 00:16:32.557 "is_configured": true, 00:16:32.557 "data_offset": 0, 00:16:32.557 "data_size": 65536 00:16:32.557 }, 00:16:32.557 { 00:16:32.557 "name": 
"BaseBdev4", 00:16:32.557 "uuid": "47ce7d73-2268-51c7-94fa-a69cc345e200", 00:16:32.557 "is_configured": true, 00:16:32.557 "data_offset": 0, 00:16:32.557 "data_size": 65536 00:16:32.557 } 00:16:32.557 ] 00:16:32.557 }' 00:16:32.557 11:54:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.557 11:54:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:32.557 11:54:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.557 11:54:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.557 11:54:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:33.935 11:54:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:33.935 11:54:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.935 11:54:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.935 11:54:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.935 11:54:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.935 11:54:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.935 11:54:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.935 11:54:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.935 11:54:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.935 11:54:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.935 11:54:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.935 11:54:59 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.935 "name": "raid_bdev1", 00:16:33.935 "uuid": "3de35909-e30f-4a87-923b-83e30377390d", 00:16:33.935 "strip_size_kb": 64, 00:16:33.935 "state": "online", 00:16:33.935 "raid_level": "raid5f", 00:16:33.935 "superblock": false, 00:16:33.935 "num_base_bdevs": 4, 00:16:33.935 "num_base_bdevs_discovered": 4, 00:16:33.935 "num_base_bdevs_operational": 4, 00:16:33.935 "process": { 00:16:33.935 "type": "rebuild", 00:16:33.935 "target": "spare", 00:16:33.935 "progress": { 00:16:33.935 "blocks": 86400, 00:16:33.935 "percent": 43 00:16:33.935 } 00:16:33.935 }, 00:16:33.935 "base_bdevs_list": [ 00:16:33.935 { 00:16:33.935 "name": "spare", 00:16:33.935 "uuid": "b4d97d41-c0e3-5c24-ad5a-ed95397f4083", 00:16:33.935 "is_configured": true, 00:16:33.935 "data_offset": 0, 00:16:33.935 "data_size": 65536 00:16:33.935 }, 00:16:33.935 { 00:16:33.935 "name": "BaseBdev2", 00:16:33.935 "uuid": "60773703-6575-5103-848f-75c106e66c7f", 00:16:33.935 "is_configured": true, 00:16:33.935 "data_offset": 0, 00:16:33.935 "data_size": 65536 00:16:33.935 }, 00:16:33.935 { 00:16:33.935 "name": "BaseBdev3", 00:16:33.935 "uuid": "9f19738e-cc63-55d6-b36d-e5e8f347accc", 00:16:33.935 "is_configured": true, 00:16:33.935 "data_offset": 0, 00:16:33.935 "data_size": 65536 00:16:33.935 }, 00:16:33.935 { 00:16:33.935 "name": "BaseBdev4", 00:16:33.935 "uuid": "47ce7d73-2268-51c7-94fa-a69cc345e200", 00:16:33.935 "is_configured": true, 00:16:33.935 "data_offset": 0, 00:16:33.935 "data_size": 65536 00:16:33.935 } 00:16:33.935 ] 00:16:33.935 }' 00:16:33.935 11:54:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.935 11:54:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.935 11:54:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.935 11:54:59 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.935 11:54:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:34.876 11:55:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:34.876 11:55:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:34.876 11:55:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.876 11:55:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:34.876 11:55:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:34.876 11:55:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.876 11:55:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.876 11:55:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.876 11:55:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.876 11:55:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.876 11:55:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.876 11:55:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.876 "name": "raid_bdev1", 00:16:34.876 "uuid": "3de35909-e30f-4a87-923b-83e30377390d", 00:16:34.876 "strip_size_kb": 64, 00:16:34.876 "state": "online", 00:16:34.876 "raid_level": "raid5f", 00:16:34.876 "superblock": false, 00:16:34.876 "num_base_bdevs": 4, 00:16:34.876 "num_base_bdevs_discovered": 4, 00:16:34.876 "num_base_bdevs_operational": 4, 00:16:34.876 "process": { 00:16:34.876 "type": "rebuild", 00:16:34.876 "target": "spare", 00:16:34.876 "progress": { 00:16:34.876 "blocks": 107520, 00:16:34.876 "percent": 54 00:16:34.876 } 
00:16:34.876 }, 00:16:34.876 "base_bdevs_list": [ 00:16:34.876 { 00:16:34.876 "name": "spare", 00:16:34.876 "uuid": "b4d97d41-c0e3-5c24-ad5a-ed95397f4083", 00:16:34.876 "is_configured": true, 00:16:34.876 "data_offset": 0, 00:16:34.876 "data_size": 65536 00:16:34.876 }, 00:16:34.876 { 00:16:34.876 "name": "BaseBdev2", 00:16:34.876 "uuid": "60773703-6575-5103-848f-75c106e66c7f", 00:16:34.876 "is_configured": true, 00:16:34.876 "data_offset": 0, 00:16:34.876 "data_size": 65536 00:16:34.876 }, 00:16:34.876 { 00:16:34.876 "name": "BaseBdev3", 00:16:34.876 "uuid": "9f19738e-cc63-55d6-b36d-e5e8f347accc", 00:16:34.876 "is_configured": true, 00:16:34.876 "data_offset": 0, 00:16:34.876 "data_size": 65536 00:16:34.876 }, 00:16:34.876 { 00:16:34.876 "name": "BaseBdev4", 00:16:34.876 "uuid": "47ce7d73-2268-51c7-94fa-a69cc345e200", 00:16:34.876 "is_configured": true, 00:16:34.876 "data_offset": 0, 00:16:34.876 "data_size": 65536 00:16:34.876 } 00:16:34.876 ] 00:16:34.876 }' 00:16:34.876 11:55:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.876 11:55:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:34.876 11:55:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.876 11:55:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:34.876 11:55:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:35.815 11:55:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:35.815 11:55:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.815 11:55:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.815 11:55:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.815 
11:55:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.815 11:55:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.815 11:55:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.815 11:55:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.815 11:55:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.815 11:55:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.815 11:55:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.815 11:55:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.815 "name": "raid_bdev1", 00:16:35.815 "uuid": "3de35909-e30f-4a87-923b-83e30377390d", 00:16:35.815 "strip_size_kb": 64, 00:16:35.815 "state": "online", 00:16:35.815 "raid_level": "raid5f", 00:16:35.815 "superblock": false, 00:16:35.815 "num_base_bdevs": 4, 00:16:35.815 "num_base_bdevs_discovered": 4, 00:16:35.815 "num_base_bdevs_operational": 4, 00:16:35.815 "process": { 00:16:35.815 "type": "rebuild", 00:16:35.815 "target": "spare", 00:16:35.815 "progress": { 00:16:35.815 "blocks": 130560, 00:16:35.815 "percent": 66 00:16:35.815 } 00:16:35.815 }, 00:16:35.815 "base_bdevs_list": [ 00:16:35.815 { 00:16:35.815 "name": "spare", 00:16:35.815 "uuid": "b4d97d41-c0e3-5c24-ad5a-ed95397f4083", 00:16:35.815 "is_configured": true, 00:16:35.815 "data_offset": 0, 00:16:35.815 "data_size": 65536 00:16:35.815 }, 00:16:35.815 { 00:16:35.815 "name": "BaseBdev2", 00:16:35.815 "uuid": "60773703-6575-5103-848f-75c106e66c7f", 00:16:35.815 "is_configured": true, 00:16:35.815 "data_offset": 0, 00:16:35.815 "data_size": 65536 00:16:35.815 }, 00:16:35.815 { 00:16:35.816 "name": "BaseBdev3", 00:16:35.816 "uuid": "9f19738e-cc63-55d6-b36d-e5e8f347accc", 
00:16:35.816 "is_configured": true, 00:16:35.816 "data_offset": 0, 00:16:35.816 "data_size": 65536 00:16:35.816 }, 00:16:35.816 { 00:16:35.816 "name": "BaseBdev4", 00:16:35.816 "uuid": "47ce7d73-2268-51c7-94fa-a69cc345e200", 00:16:35.816 "is_configured": true, 00:16:35.816 "data_offset": 0, 00:16:35.816 "data_size": 65536 00:16:35.816 } 00:16:35.816 ] 00:16:35.816 }' 00:16:35.816 11:55:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.075 11:55:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:36.075 11:55:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.075 11:55:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:36.075 11:55:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:37.037 11:55:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:37.037 11:55:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.037 11:55:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.037 11:55:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.037 11:55:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.037 11:55:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.037 11:55:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.037 11:55:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.037 11:55:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.037 11:55:03 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:37.037 11:55:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.037 11:55:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.037 "name": "raid_bdev1", 00:16:37.037 "uuid": "3de35909-e30f-4a87-923b-83e30377390d", 00:16:37.037 "strip_size_kb": 64, 00:16:37.037 "state": "online", 00:16:37.037 "raid_level": "raid5f", 00:16:37.037 "superblock": false, 00:16:37.037 "num_base_bdevs": 4, 00:16:37.037 "num_base_bdevs_discovered": 4, 00:16:37.037 "num_base_bdevs_operational": 4, 00:16:37.037 "process": { 00:16:37.037 "type": "rebuild", 00:16:37.037 "target": "spare", 00:16:37.037 "progress": { 00:16:37.037 "blocks": 151680, 00:16:37.037 "percent": 77 00:16:37.037 } 00:16:37.037 }, 00:16:37.037 "base_bdevs_list": [ 00:16:37.037 { 00:16:37.037 "name": "spare", 00:16:37.037 "uuid": "b4d97d41-c0e3-5c24-ad5a-ed95397f4083", 00:16:37.037 "is_configured": true, 00:16:37.037 "data_offset": 0, 00:16:37.037 "data_size": 65536 00:16:37.037 }, 00:16:37.037 { 00:16:37.037 "name": "BaseBdev2", 00:16:37.037 "uuid": "60773703-6575-5103-848f-75c106e66c7f", 00:16:37.037 "is_configured": true, 00:16:37.037 "data_offset": 0, 00:16:37.037 "data_size": 65536 00:16:37.037 }, 00:16:37.037 { 00:16:37.037 "name": "BaseBdev3", 00:16:37.037 "uuid": "9f19738e-cc63-55d6-b36d-e5e8f347accc", 00:16:37.037 "is_configured": true, 00:16:37.037 "data_offset": 0, 00:16:37.037 "data_size": 65536 00:16:37.037 }, 00:16:37.037 { 00:16:37.037 "name": "BaseBdev4", 00:16:37.037 "uuid": "47ce7d73-2268-51c7-94fa-a69cc345e200", 00:16:37.037 "is_configured": true, 00:16:37.037 "data_offset": 0, 00:16:37.037 "data_size": 65536 00:16:37.037 } 00:16:37.037 ] 00:16:37.037 }' 00:16:37.037 11:55:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.037 11:55:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:16:37.037 11:55:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.308 11:55:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.308 11:55:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:38.246 11:55:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:38.246 11:55:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.246 11:55:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.246 11:55:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.246 11:55:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.246 11:55:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.246 11:55:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.246 11:55:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.246 11:55:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.246 11:55:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.246 11:55:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.246 11:55:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.246 "name": "raid_bdev1", 00:16:38.246 "uuid": "3de35909-e30f-4a87-923b-83e30377390d", 00:16:38.246 "strip_size_kb": 64, 00:16:38.246 "state": "online", 00:16:38.246 "raid_level": "raid5f", 00:16:38.246 "superblock": false, 00:16:38.246 "num_base_bdevs": 4, 00:16:38.246 "num_base_bdevs_discovered": 4, 00:16:38.246 "num_base_bdevs_operational": 4, 00:16:38.246 
"process": { 00:16:38.246 "type": "rebuild", 00:16:38.246 "target": "spare", 00:16:38.246 "progress": { 00:16:38.246 "blocks": 174720, 00:16:38.246 "percent": 88 00:16:38.246 } 00:16:38.246 }, 00:16:38.246 "base_bdevs_list": [ 00:16:38.246 { 00:16:38.246 "name": "spare", 00:16:38.246 "uuid": "b4d97d41-c0e3-5c24-ad5a-ed95397f4083", 00:16:38.246 "is_configured": true, 00:16:38.246 "data_offset": 0, 00:16:38.246 "data_size": 65536 00:16:38.246 }, 00:16:38.246 { 00:16:38.246 "name": "BaseBdev2", 00:16:38.246 "uuid": "60773703-6575-5103-848f-75c106e66c7f", 00:16:38.246 "is_configured": true, 00:16:38.246 "data_offset": 0, 00:16:38.246 "data_size": 65536 00:16:38.246 }, 00:16:38.246 { 00:16:38.246 "name": "BaseBdev3", 00:16:38.246 "uuid": "9f19738e-cc63-55d6-b36d-e5e8f347accc", 00:16:38.246 "is_configured": true, 00:16:38.246 "data_offset": 0, 00:16:38.246 "data_size": 65536 00:16:38.246 }, 00:16:38.246 { 00:16:38.246 "name": "BaseBdev4", 00:16:38.246 "uuid": "47ce7d73-2268-51c7-94fa-a69cc345e200", 00:16:38.246 "is_configured": true, 00:16:38.246 "data_offset": 0, 00:16:38.246 "data_size": 65536 00:16:38.246 } 00:16:38.246 ] 00:16:38.246 }' 00:16:38.246 11:55:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.246 11:55:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:38.246 11:55:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.246 11:55:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.246 11:55:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:39.623 11:55:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:39.623 11:55:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.623 11:55:05 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.623 11:55:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.623 11:55:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:39.623 11:55:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.623 11:55:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.623 11:55:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.623 11:55:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.623 11:55:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.623 11:55:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.623 11:55:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.623 "name": "raid_bdev1", 00:16:39.623 "uuid": "3de35909-e30f-4a87-923b-83e30377390d", 00:16:39.623 "strip_size_kb": 64, 00:16:39.623 "state": "online", 00:16:39.623 "raid_level": "raid5f", 00:16:39.623 "superblock": false, 00:16:39.623 "num_base_bdevs": 4, 00:16:39.623 "num_base_bdevs_discovered": 4, 00:16:39.623 "num_base_bdevs_operational": 4, 00:16:39.623 "process": { 00:16:39.623 "type": "rebuild", 00:16:39.623 "target": "spare", 00:16:39.623 "progress": { 00:16:39.623 "blocks": 195840, 00:16:39.623 "percent": 99 00:16:39.623 } 00:16:39.623 }, 00:16:39.623 "base_bdevs_list": [ 00:16:39.623 { 00:16:39.623 "name": "spare", 00:16:39.623 "uuid": "b4d97d41-c0e3-5c24-ad5a-ed95397f4083", 00:16:39.623 "is_configured": true, 00:16:39.623 "data_offset": 0, 00:16:39.623 "data_size": 65536 00:16:39.623 }, 00:16:39.623 { 00:16:39.623 "name": "BaseBdev2", 00:16:39.623 "uuid": "60773703-6575-5103-848f-75c106e66c7f", 00:16:39.623 "is_configured": true, 00:16:39.623 
"data_offset": 0, 00:16:39.623 "data_size": 65536 00:16:39.623 }, 00:16:39.623 { 00:16:39.623 "name": "BaseBdev3", 00:16:39.623 "uuid": "9f19738e-cc63-55d6-b36d-e5e8f347accc", 00:16:39.623 "is_configured": true, 00:16:39.623 "data_offset": 0, 00:16:39.623 "data_size": 65536 00:16:39.623 }, 00:16:39.623 { 00:16:39.623 "name": "BaseBdev4", 00:16:39.623 "uuid": "47ce7d73-2268-51c7-94fa-a69cc345e200", 00:16:39.623 "is_configured": true, 00:16:39.623 "data_offset": 0, 00:16:39.623 "data_size": 65536 00:16:39.623 } 00:16:39.623 ] 00:16:39.623 }' 00:16:39.623 11:55:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.623 [2024-11-27 11:55:05.646293] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:39.623 [2024-11-27 11:55:05.646422] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:39.623 [2024-11-27 11:55:05.646498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.623 11:55:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:39.623 11:55:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.623 11:55:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.623 11:55:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:40.559 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:40.559 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.559 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.559 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.559 11:55:06 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.559 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.559 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.559 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.559 11:55:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.559 11:55:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.559 11:55:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.559 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.559 "name": "raid_bdev1", 00:16:40.559 "uuid": "3de35909-e30f-4a87-923b-83e30377390d", 00:16:40.559 "strip_size_kb": 64, 00:16:40.559 "state": "online", 00:16:40.559 "raid_level": "raid5f", 00:16:40.559 "superblock": false, 00:16:40.559 "num_base_bdevs": 4, 00:16:40.559 "num_base_bdevs_discovered": 4, 00:16:40.559 "num_base_bdevs_operational": 4, 00:16:40.559 "base_bdevs_list": [ 00:16:40.559 { 00:16:40.559 "name": "spare", 00:16:40.559 "uuid": "b4d97d41-c0e3-5c24-ad5a-ed95397f4083", 00:16:40.559 "is_configured": true, 00:16:40.559 "data_offset": 0, 00:16:40.559 "data_size": 65536 00:16:40.559 }, 00:16:40.559 { 00:16:40.559 "name": "BaseBdev2", 00:16:40.559 "uuid": "60773703-6575-5103-848f-75c106e66c7f", 00:16:40.559 "is_configured": true, 00:16:40.559 "data_offset": 0, 00:16:40.559 "data_size": 65536 00:16:40.559 }, 00:16:40.559 { 00:16:40.559 "name": "BaseBdev3", 00:16:40.559 "uuid": "9f19738e-cc63-55d6-b36d-e5e8f347accc", 00:16:40.559 "is_configured": true, 00:16:40.559 "data_offset": 0, 00:16:40.559 "data_size": 65536 00:16:40.559 }, 00:16:40.559 { 00:16:40.559 "name": "BaseBdev4", 00:16:40.559 "uuid": "47ce7d73-2268-51c7-94fa-a69cc345e200", 00:16:40.559 "is_configured": 
true, 00:16:40.559 "data_offset": 0, 00:16:40.559 "data_size": 65536 00:16:40.559 } 00:16:40.559 ] 00:16:40.559 }' 00:16:40.559 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.559 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:40.559 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.560 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:40.560 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:40.560 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:40.560 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.560 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:40.560 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:40.560 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.560 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.560 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.560 11:55:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.560 11:55:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.560 11:55:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.560 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.560 "name": "raid_bdev1", 00:16:40.560 "uuid": "3de35909-e30f-4a87-923b-83e30377390d", 00:16:40.560 "strip_size_kb": 64, 00:16:40.560 "state": 
"online", 00:16:40.560 "raid_level": "raid5f", 00:16:40.560 "superblock": false, 00:16:40.560 "num_base_bdevs": 4, 00:16:40.560 "num_base_bdevs_discovered": 4, 00:16:40.560 "num_base_bdevs_operational": 4, 00:16:40.560 "base_bdevs_list": [ 00:16:40.560 { 00:16:40.560 "name": "spare", 00:16:40.560 "uuid": "b4d97d41-c0e3-5c24-ad5a-ed95397f4083", 00:16:40.560 "is_configured": true, 00:16:40.560 "data_offset": 0, 00:16:40.560 "data_size": 65536 00:16:40.560 }, 00:16:40.560 { 00:16:40.560 "name": "BaseBdev2", 00:16:40.560 "uuid": "60773703-6575-5103-848f-75c106e66c7f", 00:16:40.560 "is_configured": true, 00:16:40.560 "data_offset": 0, 00:16:40.560 "data_size": 65536 00:16:40.560 }, 00:16:40.560 { 00:16:40.560 "name": "BaseBdev3", 00:16:40.560 "uuid": "9f19738e-cc63-55d6-b36d-e5e8f347accc", 00:16:40.560 "is_configured": true, 00:16:40.560 "data_offset": 0, 00:16:40.560 "data_size": 65536 00:16:40.560 }, 00:16:40.560 { 00:16:40.560 "name": "BaseBdev4", 00:16:40.560 "uuid": "47ce7d73-2268-51c7-94fa-a69cc345e200", 00:16:40.560 "is_configured": true, 00:16:40.560 "data_offset": 0, 00:16:40.560 "data_size": 65536 00:16:40.560 } 00:16:40.560 ] 00:16:40.560 }' 00:16:40.560 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.560 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:40.560 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.819 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:40.819 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:40.819 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.819 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.819 11:55:06 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.819 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.819 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:40.819 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.819 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.819 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.819 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.819 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.819 11:55:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.819 11:55:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.819 11:55:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.819 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.819 11:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.819 "name": "raid_bdev1", 00:16:40.819 "uuid": "3de35909-e30f-4a87-923b-83e30377390d", 00:16:40.819 "strip_size_kb": 64, 00:16:40.819 "state": "online", 00:16:40.819 "raid_level": "raid5f", 00:16:40.819 "superblock": false, 00:16:40.819 "num_base_bdevs": 4, 00:16:40.819 "num_base_bdevs_discovered": 4, 00:16:40.819 "num_base_bdevs_operational": 4, 00:16:40.819 "base_bdevs_list": [ 00:16:40.819 { 00:16:40.819 "name": "spare", 00:16:40.819 "uuid": "b4d97d41-c0e3-5c24-ad5a-ed95397f4083", 00:16:40.819 "is_configured": true, 00:16:40.819 "data_offset": 0, 00:16:40.819 "data_size": 65536 00:16:40.819 }, 00:16:40.819 { 00:16:40.819 
"name": "BaseBdev2", 00:16:40.819 "uuid": "60773703-6575-5103-848f-75c106e66c7f", 00:16:40.819 "is_configured": true, 00:16:40.819 "data_offset": 0, 00:16:40.819 "data_size": 65536 00:16:40.819 }, 00:16:40.819 { 00:16:40.819 "name": "BaseBdev3", 00:16:40.819 "uuid": "9f19738e-cc63-55d6-b36d-e5e8f347accc", 00:16:40.819 "is_configured": true, 00:16:40.819 "data_offset": 0, 00:16:40.819 "data_size": 65536 00:16:40.819 }, 00:16:40.819 { 00:16:40.819 "name": "BaseBdev4", 00:16:40.819 "uuid": "47ce7d73-2268-51c7-94fa-a69cc345e200", 00:16:40.819 "is_configured": true, 00:16:40.819 "data_offset": 0, 00:16:40.819 "data_size": 65536 00:16:40.819 } 00:16:40.819 ] 00:16:40.819 }' 00:16:40.819 11:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.819 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.077 11:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:41.077 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.077 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.077 [2024-11-27 11:55:07.440838] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:41.077 [2024-11-27 11:55:07.440940] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:41.077 [2024-11-27 11:55:07.441070] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:41.077 [2024-11-27 11:55:07.441198] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:41.077 [2024-11-27 11:55:07.441251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:41.077 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.077 11:55:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:41.077 11:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.077 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.077 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.335 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.335 11:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:41.335 11:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:41.335 11:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:41.335 11:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:41.335 11:55:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:41.335 11:55:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:41.335 11:55:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:41.335 11:55:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:41.335 11:55:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:41.335 11:55:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:41.335 11:55:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:41.335 11:55:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:41.335 11:55:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:41.335 /dev/nbd0 00:16:41.335 11:55:07 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:41.335 11:55:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:41.335 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:41.335 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:41.335 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:41.335 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:41.335 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:41.594 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:41.594 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:41.594 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:41.594 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:41.594 1+0 records in 00:16:41.594 1+0 records out 00:16:41.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233155 s, 17.6 MB/s 00:16:41.594 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:41.594 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:41.594 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:41.594 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:41.594 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:41.594 11:55:07 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:41.594 11:55:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:41.594 11:55:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:41.594 /dev/nbd1 00:16:41.594 11:55:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:41.594 11:55:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:41.594 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:41.594 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:41.594 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:41.594 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:41.594 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:41.594 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:41.594 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:41.594 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:41.594 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:41.854 1+0 records in 00:16:41.854 1+0 records out 00:16:41.854 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440772 s, 9.3 MB/s 00:16:41.854 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:41.854 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:41.854 11:55:07 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:41.854 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:41.854 11:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:41.854 11:55:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:41.854 11:55:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:41.854 11:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:41.854 11:55:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:41.854 11:55:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:41.854 11:55:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:41.854 11:55:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:41.854 11:55:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:41.854 11:55:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:41.854 11:55:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:42.164 11:55:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:42.164 11:55:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:42.164 11:55:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:42.164 11:55:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:42.164 11:55:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:42.164 11:55:08 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:42.164 11:55:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:42.164 11:55:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:42.164 11:55:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:42.164 11:55:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:42.455 11:55:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:42.455 11:55:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:42.455 11:55:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:42.455 11:55:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:42.455 11:55:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:42.455 11:55:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:42.455 11:55:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:42.455 11:55:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:42.455 11:55:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:42.455 11:55:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84672 00:16:42.455 11:55:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84672 ']' 00:16:42.455 11:55:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84672 00:16:42.455 11:55:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:42.455 11:55:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:42.455 11:55:08 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84672 00:16:42.455 11:55:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:42.455 11:55:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:42.455 11:55:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84672' 00:16:42.455 killing process with pid 84672 00:16:42.455 11:55:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84672 00:16:42.455 Received shutdown signal, test time was about 60.000000 seconds 00:16:42.455 00:16:42.455 Latency(us) 00:16:42.455 [2024-11-27T11:55:08.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.455 [2024-11-27T11:55:08.840Z] =================================================================================================================== 00:16:42.455 [2024-11-27T11:55:08.840Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:42.455 [2024-11-27 11:55:08.645893] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:42.455 11:55:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84672 00:16:43.023 [2024-11-27 11:55:09.141557] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:43.980 11:55:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:43.980 00:16:43.980 real 0m20.030s 00:16:43.980 user 0m23.985s 00:16:43.980 sys 0m2.175s 00:16:43.980 ************************************ 00:16:43.980 END TEST raid5f_rebuild_test 00:16:43.980 ************************************ 00:16:43.980 11:55:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:43.980 11:55:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.980 11:55:10 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb 
raid_rebuild_test raid5f 4 true false true 00:16:43.980 11:55:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:43.980 11:55:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:43.980 11:55:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:43.980 ************************************ 00:16:43.980 START TEST raid5f_rebuild_test_sb 00:16:43.980 ************************************ 00:16:43.980 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:16:43.980 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:43.980 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:43.980 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:43.980 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:43.980 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:43.980 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:43.980 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:43.980 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:43.980 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:43.980 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:43.980 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:43.980 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:43.980 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:43.980 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:43.980 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:43.980 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:43.980 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:43.980 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:43.981 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:43.981 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:43.981 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:43.981 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:43.981 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:43.981 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:43.981 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:43.981 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:43.981 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:43.981 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:43.981 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:43.981 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:43.981 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:43.981 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:43.981 11:55:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85194 00:16:43.981 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:43.981 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85194 00:16:43.981 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85194 ']' 00:16:43.981 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.981 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.981 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.981 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.981 11:55:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.240 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:44.240 Zero copy mechanism will not be used. 00:16:44.240 [2024-11-27 11:55:10.416668] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:16:44.240 [2024-11-27 11:55:10.416800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85194 ] 00:16:44.240 [2024-11-27 11:55:10.591953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.499 [2024-11-27 11:55:10.703832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.758 [2024-11-27 11:55:10.899074] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:44.758 [2024-11-27 11:55:10.899133] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.017 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:45.017 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:45.017 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.017 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:45.017 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.017 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.017 BaseBdev1_malloc 00:16:45.017 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.017 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:45.017 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.017 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.017 [2024-11-27 11:55:11.298998] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:45.017 [2024-11-27 11:55:11.299112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.017 [2024-11-27 11:55:11.299172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:45.017 [2024-11-27 11:55:11.299208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.017 [2024-11-27 11:55:11.301422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.017 [2024-11-27 11:55:11.301497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:45.017 BaseBdev1 00:16:45.017 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.017 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.017 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:45.017 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.017 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.017 BaseBdev2_malloc 00:16:45.017 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.017 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:45.017 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.017 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.017 [2024-11-27 11:55:11.356035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:45.017 [2024-11-27 11:55:11.356159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:45.017 [2024-11-27 11:55:11.356190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:45.017 [2024-11-27 11:55:11.356202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.017 [2024-11-27 11:55:11.358407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.017 [2024-11-27 11:55:11.358446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:45.017 BaseBdev2 00:16:45.017 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.017 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.017 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:45.018 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.018 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.278 BaseBdev3_malloc 00:16:45.278 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.278 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:45.278 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.278 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.278 [2024-11-27 11:55:11.424495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:45.278 [2024-11-27 11:55:11.424554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.278 [2024-11-27 11:55:11.424577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:45.278 [2024-11-27 
11:55:11.424588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.278 [2024-11-27 11:55:11.426735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.278 [2024-11-27 11:55:11.426812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:45.278 BaseBdev3 00:16:45.278 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.278 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.278 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:45.278 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.278 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.278 BaseBdev4_malloc 00:16:45.278 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.279 [2024-11-27 11:55:11.480073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:45.279 [2024-11-27 11:55:11.480132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.279 [2024-11-27 11:55:11.480153] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:45.279 [2024-11-27 11:55:11.480162] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.279 [2024-11-27 11:55:11.482177] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:16:45.279 [2024-11-27 11:55:11.482218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:45.279 BaseBdev4 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.279 spare_malloc 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.279 spare_delay 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.279 [2024-11-27 11:55:11.545303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:45.279 [2024-11-27 11:55:11.545355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.279 [2024-11-27 11:55:11.545373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:16:45.279 [2024-11-27 11:55:11.545383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.279 [2024-11-27 11:55:11.547356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.279 [2024-11-27 11:55:11.547396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:45.279 spare 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.279 [2024-11-27 11:55:11.557349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:45.279 [2024-11-27 11:55:11.559182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:45.279 [2024-11-27 11:55:11.559238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:45.279 [2024-11-27 11:55:11.559287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:45.279 [2024-11-27 11:55:11.559471] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:45.279 [2024-11-27 11:55:11.559506] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:45.279 [2024-11-27 11:55:11.559766] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:45.279 [2024-11-27 11:55:11.567756] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:45.279 [2024-11-27 11:55:11.567819] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:16:45.279 [2024-11-27 11:55:11.568128] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.279 11:55:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.279 "name": "raid_bdev1", 00:16:45.279 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6", 00:16:45.279 "strip_size_kb": 64, 00:16:45.279 "state": "online", 00:16:45.279 "raid_level": "raid5f", 00:16:45.279 "superblock": true, 00:16:45.279 "num_base_bdevs": 4, 00:16:45.279 "num_base_bdevs_discovered": 4, 00:16:45.279 "num_base_bdevs_operational": 4, 00:16:45.279 "base_bdevs_list": [ 00:16:45.279 { 00:16:45.279 "name": "BaseBdev1", 00:16:45.279 "uuid": "5eb3f6e6-868a-584a-8512-89071108e48e", 00:16:45.279 "is_configured": true, 00:16:45.279 "data_offset": 2048, 00:16:45.279 "data_size": 63488 00:16:45.279 }, 00:16:45.279 { 00:16:45.279 "name": "BaseBdev2", 00:16:45.279 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571", 00:16:45.279 "is_configured": true, 00:16:45.279 "data_offset": 2048, 00:16:45.279 "data_size": 63488 00:16:45.279 }, 00:16:45.279 { 00:16:45.279 "name": "BaseBdev3", 00:16:45.279 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530", 00:16:45.279 "is_configured": true, 00:16:45.279 "data_offset": 2048, 00:16:45.279 "data_size": 63488 00:16:45.279 }, 00:16:45.279 { 00:16:45.279 "name": "BaseBdev4", 00:16:45.279 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae", 00:16:45.279 "is_configured": true, 00:16:45.279 "data_offset": 2048, 00:16:45.279 "data_size": 63488 00:16:45.279 } 00:16:45.279 ] 00:16:45.279 }' 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.279 11:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.850 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:45.850 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:45.850 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.850 11:55:12 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.850 [2024-11-27 11:55:12.036214] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:45.850 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.850 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:45.850 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.850 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:45.850 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.850 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.850 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.850 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:45.850 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:45.850 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:45.850 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:45.850 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:45.850 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:45.850 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:45.850 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:45.850 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:45.850 11:55:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:45.850 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:45.850 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:45.850 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:45.850 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:46.109 [2024-11-27 11:55:12.303583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:46.109 /dev/nbd0 00:16:46.109 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:46.109 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:46.109 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:46.109 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:46.109 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:46.109 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:46.109 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:46.109 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:46.109 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:46.109 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:46.109 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:46.109 1+0 records in 00:16:46.109 
1+0 records out 00:16:46.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000475354 s, 8.6 MB/s 00:16:46.110 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:46.110 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:46.110 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:46.110 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:46.110 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:46.110 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:46.110 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:46.110 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:46.110 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:46.110 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:46.110 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:46.679 496+0 records in 00:16:46.679 496+0 records out 00:16:46.679 97517568 bytes (98 MB, 93 MiB) copied, 0.463672 s, 210 MB/s 00:16:46.679 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:46.679 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:46.679 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:46.679 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:46.679 11:55:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:46.679 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:46.679 11:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:46.679 [2024-11-27 11:55:13.041376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.679 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:46.938 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:46.938 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:46.938 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:46.938 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:46.938 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:46.938 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:46.938 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:46.938 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:46.938 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.938 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.938 [2024-11-27 11:55:13.080114] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:46.938 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.938 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:46.939 11:55:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.939 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.939 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.939 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.939 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:46.939 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.939 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.939 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.939 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.939 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.939 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.939 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.939 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.939 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.939 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.939 "name": "raid_bdev1", 00:16:46.939 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6", 00:16:46.939 "strip_size_kb": 64, 00:16:46.939 "state": "online", 00:16:46.939 "raid_level": "raid5f", 00:16:46.939 "superblock": true, 00:16:46.939 "num_base_bdevs": 4, 00:16:46.939 "num_base_bdevs_discovered": 3, 00:16:46.939 "num_base_bdevs_operational": 3, 00:16:46.939 
"base_bdevs_list": [ 00:16:46.939 { 00:16:46.939 "name": null, 00:16:46.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.939 "is_configured": false, 00:16:46.939 "data_offset": 0, 00:16:46.939 "data_size": 63488 00:16:46.939 }, 00:16:46.939 { 00:16:46.939 "name": "BaseBdev2", 00:16:46.939 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571", 00:16:46.939 "is_configured": true, 00:16:46.939 "data_offset": 2048, 00:16:46.939 "data_size": 63488 00:16:46.939 }, 00:16:46.939 { 00:16:46.939 "name": "BaseBdev3", 00:16:46.939 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530", 00:16:46.939 "is_configured": true, 00:16:46.939 "data_offset": 2048, 00:16:46.939 "data_size": 63488 00:16:46.939 }, 00:16:46.939 { 00:16:46.939 "name": "BaseBdev4", 00:16:46.939 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae", 00:16:46.939 "is_configured": true, 00:16:46.939 "data_offset": 2048, 00:16:46.939 "data_size": 63488 00:16:46.939 } 00:16:46.939 ] 00:16:46.939 }' 00:16:46.939 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.939 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.199 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:47.199 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.199 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.199 [2024-11-27 11:55:13.515353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:47.199 [2024-11-27 11:55:13.531683] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:16:47.199 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.199 11:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:47.199 [2024-11-27 11:55:13.541829] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.578 "name": "raid_bdev1", 00:16:48.578 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6", 00:16:48.578 "strip_size_kb": 64, 00:16:48.578 "state": "online", 00:16:48.578 "raid_level": "raid5f", 00:16:48.578 "superblock": true, 00:16:48.578 "num_base_bdevs": 4, 00:16:48.578 "num_base_bdevs_discovered": 4, 00:16:48.578 "num_base_bdevs_operational": 4, 00:16:48.578 "process": { 00:16:48.578 "type": "rebuild", 00:16:48.578 "target": "spare", 00:16:48.578 "progress": { 00:16:48.578 "blocks": 19200, 00:16:48.578 "percent": 10 00:16:48.578 } 00:16:48.578 }, 00:16:48.578 "base_bdevs_list": [ 00:16:48.578 { 00:16:48.578 "name": "spare", 00:16:48.578 "uuid": 
"e5407834-330c-5961-b4a6-aaee2e28e664", 00:16:48.578 "is_configured": true, 00:16:48.578 "data_offset": 2048, 00:16:48.578 "data_size": 63488 00:16:48.578 }, 00:16:48.578 { 00:16:48.578 "name": "BaseBdev2", 00:16:48.578 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571", 00:16:48.578 "is_configured": true, 00:16:48.578 "data_offset": 2048, 00:16:48.578 "data_size": 63488 00:16:48.578 }, 00:16:48.578 { 00:16:48.578 "name": "BaseBdev3", 00:16:48.578 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530", 00:16:48.578 "is_configured": true, 00:16:48.578 "data_offset": 2048, 00:16:48.578 "data_size": 63488 00:16:48.578 }, 00:16:48.578 { 00:16:48.578 "name": "BaseBdev4", 00:16:48.578 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae", 00:16:48.578 "is_configured": true, 00:16:48.578 "data_offset": 2048, 00:16:48.578 "data_size": 63488 00:16:48.578 } 00:16:48.578 ] 00:16:48.578 }' 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.578 [2024-11-27 11:55:14.700932] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:48.578 [2024-11-27 11:55:14.749829] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:48.578 [2024-11-27 11:55:14.749968] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.578 [2024-11-27 11:55:14.749987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:48.578 [2024-11-27 11:55:14.749997] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.578 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.578 "name": "raid_bdev1", 00:16:48.578 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6", 00:16:48.578 "strip_size_kb": 64, 00:16:48.578 "state": "online", 00:16:48.578 "raid_level": "raid5f", 00:16:48.578 "superblock": true, 00:16:48.578 "num_base_bdevs": 4, 00:16:48.578 "num_base_bdevs_discovered": 3, 00:16:48.578 "num_base_bdevs_operational": 3, 00:16:48.578 "base_bdevs_list": [ 00:16:48.578 { 00:16:48.578 "name": null, 00:16:48.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.578 "is_configured": false, 00:16:48.578 "data_offset": 0, 00:16:48.579 "data_size": 63488 00:16:48.579 }, 00:16:48.579 { 00:16:48.579 "name": "BaseBdev2", 00:16:48.579 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571", 00:16:48.579 "is_configured": true, 00:16:48.579 "data_offset": 2048, 00:16:48.579 "data_size": 63488 00:16:48.579 }, 00:16:48.579 { 00:16:48.579 "name": "BaseBdev3", 00:16:48.579 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530", 00:16:48.579 "is_configured": true, 00:16:48.579 "data_offset": 2048, 00:16:48.579 "data_size": 63488 00:16:48.579 }, 00:16:48.579 { 00:16:48.579 "name": "BaseBdev4", 00:16:48.579 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae", 00:16:48.579 "is_configured": true, 00:16:48.579 "data_offset": 2048, 00:16:48.579 "data_size": 63488 00:16:48.579 } 00:16:48.579 ] 00:16:48.579 }' 00:16:48.579 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.579 11:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.148 11:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:49.148 11:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.148 
11:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:49.148 11:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:49.148 11:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.148 11:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.148 11:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.148 11:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.148 11:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.148 11:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.148 11:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.148 "name": "raid_bdev1", 00:16:49.148 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6", 00:16:49.148 "strip_size_kb": 64, 00:16:49.148 "state": "online", 00:16:49.148 "raid_level": "raid5f", 00:16:49.148 "superblock": true, 00:16:49.148 "num_base_bdevs": 4, 00:16:49.148 "num_base_bdevs_discovered": 3, 00:16:49.148 "num_base_bdevs_operational": 3, 00:16:49.148 "base_bdevs_list": [ 00:16:49.148 { 00:16:49.148 "name": null, 00:16:49.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.148 "is_configured": false, 00:16:49.148 "data_offset": 0, 00:16:49.148 "data_size": 63488 00:16:49.148 }, 00:16:49.148 { 00:16:49.148 "name": "BaseBdev2", 00:16:49.148 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571", 00:16:49.148 "is_configured": true, 00:16:49.148 "data_offset": 2048, 00:16:49.148 "data_size": 63488 00:16:49.148 }, 00:16:49.148 { 00:16:49.148 "name": "BaseBdev3", 00:16:49.148 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530", 00:16:49.148 "is_configured": true, 00:16:49.148 "data_offset": 2048, 00:16:49.148 
"data_size": 63488 00:16:49.148 }, 00:16:49.148 { 00:16:49.148 "name": "BaseBdev4", 00:16:49.148 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae", 00:16:49.148 "is_configured": true, 00:16:49.148 "data_offset": 2048, 00:16:49.148 "data_size": 63488 00:16:49.148 } 00:16:49.148 ] 00:16:49.148 }' 00:16:49.148 11:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.148 11:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:49.148 11:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.148 11:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:49.148 11:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:49.148 11:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.148 11:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.148 [2024-11-27 11:55:15.385249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:49.148 [2024-11-27 11:55:15.400435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:16:49.148 11:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.148 11:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:49.148 [2024-11-27 11:55:15.409909] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:50.112 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.112 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.112 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.112 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.112 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.112 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.112 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.112 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.113 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.113 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.113 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.113 "name": "raid_bdev1", 00:16:50.113 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6", 00:16:50.113 "strip_size_kb": 64, 00:16:50.113 "state": "online", 00:16:50.113 "raid_level": "raid5f", 00:16:50.113 "superblock": true, 00:16:50.113 "num_base_bdevs": 4, 00:16:50.113 "num_base_bdevs_discovered": 4, 00:16:50.113 "num_base_bdevs_operational": 4, 00:16:50.113 "process": { 00:16:50.113 "type": "rebuild", 00:16:50.113 "target": "spare", 00:16:50.113 "progress": { 00:16:50.113 "blocks": 17280, 00:16:50.113 "percent": 9 00:16:50.113 } 00:16:50.113 }, 00:16:50.113 "base_bdevs_list": [ 00:16:50.113 { 00:16:50.113 "name": "spare", 00:16:50.113 "uuid": "e5407834-330c-5961-b4a6-aaee2e28e664", 00:16:50.113 "is_configured": true, 00:16:50.113 "data_offset": 2048, 00:16:50.113 "data_size": 63488 00:16:50.113 }, 00:16:50.113 { 00:16:50.113 "name": "BaseBdev2", 00:16:50.113 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571", 00:16:50.113 "is_configured": true, 00:16:50.113 "data_offset": 2048, 00:16:50.113 "data_size": 63488 00:16:50.113 }, 00:16:50.113 { 
00:16:50.113 "name": "BaseBdev3", 00:16:50.113 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530", 00:16:50.113 "is_configured": true, 00:16:50.113 "data_offset": 2048, 00:16:50.113 "data_size": 63488 00:16:50.113 }, 00:16:50.113 { 00:16:50.113 "name": "BaseBdev4", 00:16:50.113 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae", 00:16:50.113 "is_configured": true, 00:16:50.113 "data_offset": 2048, 00:16:50.113 "data_size": 63488 00:16:50.113 } 00:16:50.113 ] 00:16:50.113 }' 00:16:50.113 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.373 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:50.373 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.373 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.373 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:50.373 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:50.373 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:50.373 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:50.373 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:50.373 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=651 00:16:50.373 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:50.373 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.373 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.373 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.373 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.373 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.373 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.373 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.373 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.373 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.373 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.373 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.373 "name": "raid_bdev1", 00:16:50.373 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6", 00:16:50.373 "strip_size_kb": 64, 00:16:50.373 "state": "online", 00:16:50.373 "raid_level": "raid5f", 00:16:50.373 "superblock": true, 00:16:50.373 "num_base_bdevs": 4, 00:16:50.373 "num_base_bdevs_discovered": 4, 00:16:50.373 "num_base_bdevs_operational": 4, 00:16:50.373 "process": { 00:16:50.373 "type": "rebuild", 00:16:50.373 "target": "spare", 00:16:50.373 "progress": { 00:16:50.373 "blocks": 21120, 00:16:50.373 "percent": 11 00:16:50.373 } 00:16:50.373 }, 00:16:50.373 "base_bdevs_list": [ 00:16:50.373 { 00:16:50.373 "name": "spare", 00:16:50.373 "uuid": "e5407834-330c-5961-b4a6-aaee2e28e664", 00:16:50.373 "is_configured": true, 00:16:50.373 "data_offset": 2048, 00:16:50.373 "data_size": 63488 00:16:50.373 }, 00:16:50.373 { 00:16:50.373 "name": "BaseBdev2", 00:16:50.373 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571", 00:16:50.373 "is_configured": true, 00:16:50.373 "data_offset": 2048, 00:16:50.373 "data_size": 63488 00:16:50.373 }, 00:16:50.373 { 
00:16:50.373 "name": "BaseBdev3", 00:16:50.373 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530", 00:16:50.373 "is_configured": true, 00:16:50.373 "data_offset": 2048, 00:16:50.373 "data_size": 63488 00:16:50.373 }, 00:16:50.373 { 00:16:50.373 "name": "BaseBdev4", 00:16:50.373 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae", 00:16:50.373 "is_configured": true, 00:16:50.373 "data_offset": 2048, 00:16:50.373 "data_size": 63488 00:16:50.373 } 00:16:50.373 ] 00:16:50.373 }' 00:16:50.373 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.373 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:50.373 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.373 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.373 11:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:51.313 11:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:51.313 11:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:51.313 11:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.313 11:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:51.313 11:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:51.313 11:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.573 11:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.573 11:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.573 11:55:17 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.573 11:55:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.573 11:55:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.573 11:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.573 "name": "raid_bdev1", 00:16:51.573 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6", 00:16:51.573 "strip_size_kb": 64, 00:16:51.573 "state": "online", 00:16:51.573 "raid_level": "raid5f", 00:16:51.573 "superblock": true, 00:16:51.573 "num_base_bdevs": 4, 00:16:51.573 "num_base_bdevs_discovered": 4, 00:16:51.573 "num_base_bdevs_operational": 4, 00:16:51.573 "process": { 00:16:51.573 "type": "rebuild", 00:16:51.573 "target": "spare", 00:16:51.573 "progress": { 00:16:51.573 "blocks": 42240, 00:16:51.573 "percent": 22 00:16:51.573 } 00:16:51.573 }, 00:16:51.573 "base_bdevs_list": [ 00:16:51.573 { 00:16:51.573 "name": "spare", 00:16:51.573 "uuid": "e5407834-330c-5961-b4a6-aaee2e28e664", 00:16:51.573 "is_configured": true, 00:16:51.573 "data_offset": 2048, 00:16:51.573 "data_size": 63488 00:16:51.573 }, 00:16:51.573 { 00:16:51.573 "name": "BaseBdev2", 00:16:51.573 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571", 00:16:51.573 "is_configured": true, 00:16:51.573 "data_offset": 2048, 00:16:51.573 "data_size": 63488 00:16:51.573 }, 00:16:51.573 { 00:16:51.573 "name": "BaseBdev3", 00:16:51.573 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530", 00:16:51.573 "is_configured": true, 00:16:51.573 "data_offset": 2048, 00:16:51.573 "data_size": 63488 00:16:51.573 }, 00:16:51.573 { 00:16:51.573 "name": "BaseBdev4", 00:16:51.573 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae", 00:16:51.573 "is_configured": true, 00:16:51.573 "data_offset": 2048, 00:16:51.573 "data_size": 63488 00:16:51.573 } 00:16:51.573 ] 00:16:51.573 }' 00:16:51.573 11:55:17 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:51.573 11:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:51.573 11:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:51.573 11:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:51.573 11:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:52.513 11:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:52.513 11:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:52.514 11:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:52.514 11:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:52.514 11:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:52.514 11:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:52.514 11:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:52.514 11:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:52.514 11:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:52.514 11:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:52.514 11:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:52.514 11:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:52.514 "name": "raid_bdev1",
00:16:52.514 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6",
00:16:52.514 "strip_size_kb": 64,
00:16:52.514 "state": "online",
00:16:52.514 "raid_level": "raid5f",
00:16:52.514 "superblock": true,
00:16:52.514 "num_base_bdevs": 4,
00:16:52.514 "num_base_bdevs_discovered": 4,
00:16:52.514 "num_base_bdevs_operational": 4,
00:16:52.514 "process": {
00:16:52.514 "type": "rebuild",
00:16:52.514 "target": "spare",
00:16:52.514 "progress": {
00:16:52.514 "blocks": 63360,
00:16:52.514 "percent": 33
00:16:52.514 }
00:16:52.514 },
00:16:52.514 "base_bdevs_list": [
00:16:52.514 {
00:16:52.514 "name": "spare",
00:16:52.514 "uuid": "e5407834-330c-5961-b4a6-aaee2e28e664",
00:16:52.514 "is_configured": true,
00:16:52.514 "data_offset": 2048,
00:16:52.514 "data_size": 63488
00:16:52.514 },
00:16:52.514 {
00:16:52.514 "name": "BaseBdev2",
00:16:52.514 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571",
00:16:52.514 "is_configured": true,
00:16:52.514 "data_offset": 2048,
00:16:52.514 "data_size": 63488
00:16:52.514 },
00:16:52.514 {
00:16:52.514 "name": "BaseBdev3",
00:16:52.514 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530",
00:16:52.514 "is_configured": true,
00:16:52.514 "data_offset": 2048,
00:16:52.514 "data_size": 63488
00:16:52.514 },
00:16:52.514 {
00:16:52.514 "name": "BaseBdev4",
00:16:52.514 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae",
00:16:52.514 "is_configured": true,
00:16:52.514 "data_offset": 2048,
00:16:52.514 "data_size": 63488
00:16:52.514 }
00:16:52.514 ]
00:16:52.514 }'
00:16:52.514 11:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:52.774 11:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:52.774 11:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:52.774 11:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:52.774 11:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:53.778 11:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:53.778 11:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:53.778 11:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:53.778 11:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:53.778 11:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:53.778 11:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:53.778 11:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:53.778 11:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:53.778 11:55:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:53.778 11:55:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:53.778 11:55:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:53.778 11:55:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:53.778 "name": "raid_bdev1",
00:16:53.778 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6",
00:16:53.778 "strip_size_kb": 64,
00:16:53.778 "state": "online",
00:16:53.778 "raid_level": "raid5f",
00:16:53.778 "superblock": true,
00:16:53.778 "num_base_bdevs": 4,
00:16:53.778 "num_base_bdevs_discovered": 4,
00:16:53.778 "num_base_bdevs_operational": 4,
00:16:53.778 "process": {
00:16:53.778 "type": "rebuild",
00:16:53.778 "target": "spare",
00:16:53.778 "progress": {
00:16:53.778 "blocks": 86400,
00:16:53.778 "percent": 45
00:16:53.778 }
00:16:53.778 },
00:16:53.778 "base_bdevs_list": [
00:16:53.778 {
00:16:53.778 "name": "spare",
00:16:53.778 "uuid": "e5407834-330c-5961-b4a6-aaee2e28e664",
00:16:53.778 "is_configured": true,
00:16:53.778 "data_offset": 2048,
00:16:53.778 "data_size": 63488
00:16:53.778 },
00:16:53.778 {
00:16:53.778 "name": "BaseBdev2",
00:16:53.778 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571",
00:16:53.778 "is_configured": true,
00:16:53.778 "data_offset": 2048,
00:16:53.778 "data_size": 63488
00:16:53.778 },
00:16:53.778 {
00:16:53.778 "name": "BaseBdev3",
00:16:53.778 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530",
00:16:53.778 "is_configured": true,
00:16:53.778 "data_offset": 2048,
00:16:53.778 "data_size": 63488
00:16:53.778 },
00:16:53.778 {
00:16:53.778 "name": "BaseBdev4",
00:16:53.778 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae",
00:16:53.778 "is_configured": true,
00:16:53.778 "data_offset": 2048,
00:16:53.778 "data_size": 63488
00:16:53.778 }
00:16:53.778 ]
00:16:53.778 }'
00:16:53.778 11:55:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:53.778 11:55:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:53.778 11:55:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:53.778 11:55:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:53.778 11:55:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:55.159 11:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:55.159 11:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:55.159 11:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:55.159 11:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:55.159 11:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:55.159 11:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:55.159 11:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:55.159 11:55:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:55.159 11:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:55.159 11:55:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:55.159 11:55:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:55.159 11:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:55.159 "name": "raid_bdev1",
00:16:55.159 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6",
00:16:55.159 "strip_size_kb": 64,
00:16:55.159 "state": "online",
00:16:55.159 "raid_level": "raid5f",
00:16:55.159 "superblock": true,
00:16:55.159 "num_base_bdevs": 4,
00:16:55.159 "num_base_bdevs_discovered": 4,
00:16:55.159 "num_base_bdevs_operational": 4,
00:16:55.159 "process": {
00:16:55.159 "type": "rebuild",
00:16:55.159 "target": "spare",
00:16:55.159 "progress": {
00:16:55.159 "blocks": 107520,
00:16:55.159 "percent": 56
00:16:55.159 }
00:16:55.159 },
00:16:55.159 "base_bdevs_list": [
00:16:55.159 {
00:16:55.159 "name": "spare",
00:16:55.159 "uuid": "e5407834-330c-5961-b4a6-aaee2e28e664",
00:16:55.159 "is_configured": true,
00:16:55.159 "data_offset": 2048,
00:16:55.159 "data_size": 63488
00:16:55.159 },
00:16:55.159 {
00:16:55.159 "name": "BaseBdev2",
00:16:55.159 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571",
00:16:55.159 "is_configured": true,
00:16:55.159 "data_offset": 2048,
00:16:55.159 "data_size": 63488
00:16:55.159 },
00:16:55.159 {
00:16:55.159 "name": "BaseBdev3",
00:16:55.159 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530",
00:16:55.159 "is_configured": true,
00:16:55.159 "data_offset": 2048,
00:16:55.159 "data_size": 63488
00:16:55.159 },
00:16:55.159 {
00:16:55.159 "name": "BaseBdev4",
00:16:55.159 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae",
00:16:55.159 "is_configured": true,
00:16:55.159 "data_offset": 2048,
00:16:55.159 "data_size": 63488
00:16:55.159 }
00:16:55.159 ]
00:16:55.159 }'
00:16:55.159 11:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:55.159 11:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:55.159 11:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:55.159 11:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:55.159 11:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:56.160 11:55:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:56.160 11:55:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:56.160 11:55:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:56.160 11:55:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:56.160 11:55:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:56.160 11:55:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:56.160 11:55:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:56.160 11:55:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:56.160 11:55:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:56.160 11:55:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:56.160 11:55:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:56.160 11:55:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:56.160 "name": "raid_bdev1",
00:16:56.160 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6",
00:16:56.160 "strip_size_kb": 64,
00:16:56.160 "state": "online",
00:16:56.160 "raid_level": "raid5f",
00:16:56.160 "superblock": true,
00:16:56.160 "num_base_bdevs": 4,
00:16:56.160 "num_base_bdevs_discovered": 4,
00:16:56.160 "num_base_bdevs_operational": 4,
00:16:56.161 "process": {
00:16:56.161 "type": "rebuild",
00:16:56.161 "target": "spare",
00:16:56.161 "progress": {
00:16:56.161 "blocks": 128640,
00:16:56.161 "percent": 67
00:16:56.161 }
00:16:56.161 },
00:16:56.161 "base_bdevs_list": [
00:16:56.161 {
00:16:56.161 "name": "spare",
00:16:56.161 "uuid": "e5407834-330c-5961-b4a6-aaee2e28e664",
00:16:56.161 "is_configured": true,
00:16:56.161 "data_offset": 2048,
00:16:56.161 "data_size": 63488
00:16:56.161 },
00:16:56.161 {
00:16:56.161 "name": "BaseBdev2",
00:16:56.161 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571",
00:16:56.161 "is_configured": true,
00:16:56.161 "data_offset": 2048,
00:16:56.161 "data_size": 63488
00:16:56.161 },
00:16:56.161 {
00:16:56.161 "name": "BaseBdev3",
00:16:56.161 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530",
00:16:56.161 "is_configured": true,
00:16:56.161 "data_offset": 2048,
00:16:56.161 "data_size": 63488
00:16:56.161 },
00:16:56.161 {
00:16:56.161 "name": "BaseBdev4",
00:16:56.161 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae",
00:16:56.161 "is_configured": true,
00:16:56.161 "data_offset": 2048,
00:16:56.161 "data_size": 63488
00:16:56.161 }
00:16:56.161 ]
00:16:56.161 }'
00:16:56.161 11:55:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:56.161 11:55:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:56.161 11:55:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:56.161 11:55:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:56.161 11:55:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:57.097 11:55:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:57.097 11:55:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:57.097 11:55:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:57.097 11:55:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:57.097 11:55:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:57.097 11:55:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:57.097 11:55:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:57.097 11:55:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:57.097 11:55:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:57.097 11:55:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:57.097 11:55:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:57.097 11:55:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:57.097 "name": "raid_bdev1",
00:16:57.097 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6",
00:16:57.097 "strip_size_kb": 64,
00:16:57.097 "state": "online",
00:16:57.097 "raid_level": "raid5f",
00:16:57.097 "superblock": true,
00:16:57.097 "num_base_bdevs": 4,
00:16:57.097 "num_base_bdevs_discovered": 4,
00:16:57.097 "num_base_bdevs_operational": 4,
00:16:57.097 "process": {
00:16:57.097 "type": "rebuild",
00:16:57.097 "target": "spare",
00:16:57.097 "progress": {
00:16:57.097 "blocks": 151680,
00:16:57.097 "percent": 79
00:16:57.097 }
00:16:57.097 },
00:16:57.097 "base_bdevs_list": [
00:16:57.097 {
00:16:57.097 "name": "spare",
00:16:57.097 "uuid": "e5407834-330c-5961-b4a6-aaee2e28e664",
00:16:57.097 "is_configured": true,
00:16:57.097 "data_offset": 2048,
00:16:57.097 "data_size": 63488
00:16:57.097 },
00:16:57.097 {
00:16:57.097 "name": "BaseBdev2",
00:16:57.097 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571",
00:16:57.097 "is_configured": true,
00:16:57.097 "data_offset": 2048,
00:16:57.097 "data_size": 63488
00:16:57.097 },
00:16:57.097 {
00:16:57.097 "name": "BaseBdev3",
00:16:57.097 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530",
00:16:57.097 "is_configured": true,
00:16:57.097 "data_offset": 2048,
00:16:57.097 "data_size": 63488
00:16:57.097 },
00:16:57.097 {
00:16:57.097 "name": "BaseBdev4",
00:16:57.097 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae",
00:16:57.097 "is_configured": true,
00:16:57.097 "data_offset": 2048,
00:16:57.097 "data_size": 63488
00:16:57.097 }
00:16:57.097 ]
00:16:57.097 }'
00:16:57.097 11:55:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:57.355 11:55:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:57.355 11:55:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:57.355 11:55:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:57.355 11:55:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:58.293 11:55:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:58.293 11:55:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:58.293 11:55:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:58.293 11:55:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:58.293 11:55:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:58.293 11:55:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:58.293 11:55:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:58.293 11:55:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:58.293 11:55:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:58.293 11:55:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:58.293 11:55:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:58.293 11:55:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:58.293 "name": "raid_bdev1",
00:16:58.293 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6",
00:16:58.293 "strip_size_kb": 64,
00:16:58.293 "state": "online",
00:16:58.293 "raid_level": "raid5f",
00:16:58.293 "superblock": true,
00:16:58.293 "num_base_bdevs": 4,
00:16:58.293 "num_base_bdevs_discovered": 4,
00:16:58.293 "num_base_bdevs_operational": 4,
00:16:58.293 "process": {
00:16:58.293 "type": "rebuild",
00:16:58.293 "target": "spare",
00:16:58.293 "progress": {
00:16:58.293 "blocks": 172800,
00:16:58.293 "percent": 90
00:16:58.293 }
00:16:58.293 },
00:16:58.293 "base_bdevs_list": [
00:16:58.293 {
00:16:58.293 "name": "spare",
00:16:58.293 "uuid": "e5407834-330c-5961-b4a6-aaee2e28e664",
00:16:58.293 "is_configured": true,
00:16:58.293 "data_offset": 2048,
00:16:58.293 "data_size": 63488
00:16:58.293 },
00:16:58.293 {
00:16:58.293 "name": "BaseBdev2",
00:16:58.293 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571",
00:16:58.293 "is_configured": true,
00:16:58.293 "data_offset": 2048,
00:16:58.293 "data_size": 63488
00:16:58.293 },
00:16:58.293 {
00:16:58.293 "name": "BaseBdev3",
00:16:58.293 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530",
00:16:58.293 "is_configured": true,
00:16:58.293 "data_offset": 2048,
00:16:58.293 "data_size": 63488
00:16:58.293 },
00:16:58.293 {
00:16:58.293 "name": "BaseBdev4",
00:16:58.293 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae",
00:16:58.293 "is_configured": true,
00:16:58.293 "data_offset": 2048,
00:16:58.293 "data_size": 63488
00:16:58.293 }
00:16:58.293 ]
00:16:58.293 }'
00:16:58.293 11:55:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:58.293 11:55:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:58.293 11:55:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:58.553 11:55:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:58.553 11:55:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:59.123 [2024-11-27 11:55:25.475999] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
[2024-11-27 11:55:25.476161] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
[2024-11-27 11:55:25.476315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:59.384 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:59.384 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:59.384 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:59.384 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:59.384 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:59.384 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:59.384 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:59.384 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:59.384 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:59.384 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:59.384 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:59.384 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:59.384 "name": "raid_bdev1",
00:16:59.384 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6",
00:16:59.384 "strip_size_kb": 64,
00:16:59.384 "state": "online",
00:16:59.384 "raid_level": "raid5f",
00:16:59.384 "superblock": true,
00:16:59.384 "num_base_bdevs": 4,
00:16:59.384 "num_base_bdevs_discovered": 4,
00:16:59.384 "num_base_bdevs_operational": 4,
00:16:59.384 "base_bdevs_list": [
00:16:59.384 {
00:16:59.384 "name": "spare",
00:16:59.384 "uuid": "e5407834-330c-5961-b4a6-aaee2e28e664",
00:16:59.384 "is_configured": true,
00:16:59.384 "data_offset": 2048,
00:16:59.384 "data_size": 63488
00:16:59.384 },
00:16:59.384 {
00:16:59.384 "name": "BaseBdev2",
00:16:59.384 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571",
00:16:59.384 "is_configured": true,
00:16:59.384 "data_offset": 2048,
00:16:59.384 "data_size": 63488
00:16:59.384 },
00:16:59.384 {
00:16:59.384 "name": "BaseBdev3",
00:16:59.384 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530",
00:16:59.384 "is_configured": true,
00:16:59.384 "data_offset": 2048,
00:16:59.384 "data_size": 63488
00:16:59.384 },
00:16:59.384 {
00:16:59.384 "name": "BaseBdev4",
00:16:59.384 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae",
00:16:59.384 "is_configured": true,
00:16:59.384 "data_offset": 2048,
00:16:59.384 "data_size": 63488
00:16:59.384 }
00:16:59.384 ]
00:16:59.384 }'
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:59.644 "name": "raid_bdev1",
00:16:59.644 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6",
00:16:59.644 "strip_size_kb": 64,
00:16:59.644 "state": "online",
00:16:59.644 "raid_level": "raid5f",
00:16:59.644 "superblock": true,
00:16:59.644 "num_base_bdevs": 4,
00:16:59.644 "num_base_bdevs_discovered": 4,
00:16:59.644 "num_base_bdevs_operational": 4,
00:16:59.644 "base_bdevs_list": [
00:16:59.644 {
00:16:59.644 "name": "spare",
00:16:59.644 "uuid": "e5407834-330c-5961-b4a6-aaee2e28e664",
00:16:59.644 "is_configured": true,
00:16:59.644 "data_offset": 2048,
00:16:59.644 "data_size": 63488
00:16:59.644 },
00:16:59.644 {
00:16:59.644 "name": "BaseBdev2",
00:16:59.644 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571",
00:16:59.644 "is_configured": true,
00:16:59.644 "data_offset": 2048,
00:16:59.644 "data_size": 63488
00:16:59.644 },
00:16:59.644 {
00:16:59.644 "name": "BaseBdev3",
00:16:59.644 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530",
00:16:59.644 "is_configured": true,
00:16:59.644 "data_offset": 2048,
00:16:59.644 "data_size": 63488
00:16:59.644 },
00:16:59.644 {
00:16:59.644 "name": "BaseBdev4",
00:16:59.644 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae",
00:16:59.644 "is_configured": true,
00:16:59.644 "data_offset": 2048,
00:16:59.644 "data_size": 63488
00:16:59.644 }
00:16:59.644 ]
00:16:59.644 }'
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:59.644 11:55:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:59.904 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:59.904 "name": "raid_bdev1",
00:16:59.904 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6",
00:16:59.904 "strip_size_kb": 64,
00:16:59.904 "state": "online",
00:16:59.904 "raid_level": "raid5f",
00:16:59.904 "superblock": true,
00:16:59.904 "num_base_bdevs": 4,
00:16:59.904 "num_base_bdevs_discovered": 4,
00:16:59.904 "num_base_bdevs_operational": 4,
00:16:59.904 "base_bdevs_list": [
00:16:59.904 {
00:16:59.904 "name": "spare",
00:16:59.904 "uuid": "e5407834-330c-5961-b4a6-aaee2e28e664",
00:16:59.904 "is_configured": true,
00:16:59.904 "data_offset": 2048,
00:16:59.904 "data_size": 63488
00:16:59.904 },
00:16:59.904 {
00:16:59.904 "name": "BaseBdev2",
00:16:59.904 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571",
00:16:59.904 "is_configured": true,
00:16:59.904 "data_offset": 2048,
00:16:59.904 "data_size": 63488
00:16:59.904 },
00:16:59.904 {
00:16:59.904 "name": "BaseBdev3",
00:16:59.904 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530",
00:16:59.904 "is_configured": true,
00:16:59.904 "data_offset": 2048,
00:16:59.904 "data_size": 63488
00:16:59.904 },
00:16:59.904 {
00:16:59.904 "name": "BaseBdev4",
00:16:59.904 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae",
00:16:59.904 "is_configured": true,
00:16:59.904 "data_offset": 2048,
00:16:59.904 "data_size": 63488
00:16:59.904 }
00:16:59.904 ]
00:16:59.904 }'
00:16:59.904 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:59.904 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:00.164 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:17:00.164 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:00.165 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:00.165 [2024-11-27 11:55:26.398699] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
[2024-11-27 11:55:26.398733] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-11-27 11:55:26.398823] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-27 11:55:26.398933] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:00.165 [2024-11-27 11:55:26.398957] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:17:00.165 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:00.165 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length
00:17:00.165 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:00.165 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:00.165 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:00.165 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:00.165 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:17:00.165 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:17:00.165 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:17:00.165 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:17:00.165 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:17:00.165 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:17:00.165 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:17:00.165 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:17:00.165 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:17:00.165 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:17:00.165 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:17:00.165 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:17:00.165 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:17:00.424 /dev/nbd0
00:17:00.424 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:17:00.424 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:17:00.424 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:17:00.424 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i
00:17:00.424 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:17:00.424 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:17:00.424 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:17:00.424 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break
00:17:00.424 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:17:00.424 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:17:00.424 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:17:00.424 1+0 records in
00:17:00.424 1+0 records out
00:17:00.424 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338913 s, 12.1 MB/s
00:17:00.424 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:00.424 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096
00:17:00.424 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:00.424 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:17:00.424 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0
00:17:00.424 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:17:00.424 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:17:00.424 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:17:00.684 /dev/nbd1
00:17:00.684 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:17:00.684 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:17:00.684 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:17:00.684 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i
00:17:00.684 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:17:00.684 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:17:00.684 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:17:00.684 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break
00:17:00.684 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:17:00.684 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:17:00.684 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:17:00.684 1+0 records in
00:17:00.684 1+0 records out
00:17:00.684 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026026 s, 15.7 MB/s
00:17:00.684 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:00.684 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096
00:17:00.684 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:00.684 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:17:00.684 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0
00:17:00.684 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:17:00.684 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:17:00.684 11:55:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:17:00.945 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:17:00.945 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:17:00.945 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:17:00.945 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:17:00.945 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:17:00.945 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:17:00.945 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:17:01.205 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55
-- # basename /dev/nbd0 00:17:01.205 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:01.205 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:01.205 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:01.205 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:01.205 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:01.205 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:01.205 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:01.205 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:01.205 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:01.205 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:01.205 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:01.205 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:01.205 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:01.205 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:01.205 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:01.205 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:01.205 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:01.205 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:01.205 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:01.205 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.205 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.465 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.465 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:01.465 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.465 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.465 [2024-11-27 11:55:27.599573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:01.465 [2024-11-27 11:55:27.599630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.465 [2024-11-27 11:55:27.599654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:01.465 [2024-11-27 11:55:27.599663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.465 [2024-11-27 11:55:27.602217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.465 [2024-11-27 11:55:27.602256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:01.465 [2024-11-27 11:55:27.602354] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:01.465 [2024-11-27 11:55:27.602407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:01.465 [2024-11-27 11:55:27.602539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:01.465 [2024-11-27 11:55:27.602631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:01.465 [2024-11-27 11:55:27.602739] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:01.465 spare 00:17:01.465 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.465 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:01.465 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.465 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.465 [2024-11-27 11:55:27.702646] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:01.465 [2024-11-27 11:55:27.702680] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:01.465 [2024-11-27 11:55:27.702985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:01.465 [2024-11-27 11:55:27.710172] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:01.465 [2024-11-27 11:55:27.710194] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:01.465 [2024-11-27 11:55:27.710373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.465 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.465 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:01.465 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.465 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.465 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.465 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:17:01.465 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:01.465 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.465 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.465 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.465 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.465 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.465 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.465 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.465 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.465 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.465 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.465 "name": "raid_bdev1", 00:17:01.465 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6", 00:17:01.465 "strip_size_kb": 64, 00:17:01.465 "state": "online", 00:17:01.465 "raid_level": "raid5f", 00:17:01.465 "superblock": true, 00:17:01.465 "num_base_bdevs": 4, 00:17:01.465 "num_base_bdevs_discovered": 4, 00:17:01.465 "num_base_bdevs_operational": 4, 00:17:01.465 "base_bdevs_list": [ 00:17:01.465 { 00:17:01.465 "name": "spare", 00:17:01.465 "uuid": "e5407834-330c-5961-b4a6-aaee2e28e664", 00:17:01.465 "is_configured": true, 00:17:01.465 "data_offset": 2048, 00:17:01.465 "data_size": 63488 00:17:01.465 }, 00:17:01.465 { 00:17:01.465 "name": "BaseBdev2", 00:17:01.465 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571", 00:17:01.465 "is_configured": true, 00:17:01.465 "data_offset": 
2048, 00:17:01.465 "data_size": 63488 00:17:01.465 }, 00:17:01.465 { 00:17:01.465 "name": "BaseBdev3", 00:17:01.465 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530", 00:17:01.465 "is_configured": true, 00:17:01.465 "data_offset": 2048, 00:17:01.465 "data_size": 63488 00:17:01.465 }, 00:17:01.465 { 00:17:01.465 "name": "BaseBdev4", 00:17:01.465 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae", 00:17:01.465 "is_configured": true, 00:17:01.465 "data_offset": 2048, 00:17:01.465 "data_size": 63488 00:17:01.465 } 00:17:01.465 ] 00:17:01.465 }' 00:17:01.465 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.465 11:55:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.036 "name": 
"raid_bdev1", 00:17:02.036 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6", 00:17:02.036 "strip_size_kb": 64, 00:17:02.036 "state": "online", 00:17:02.036 "raid_level": "raid5f", 00:17:02.036 "superblock": true, 00:17:02.036 "num_base_bdevs": 4, 00:17:02.036 "num_base_bdevs_discovered": 4, 00:17:02.036 "num_base_bdevs_operational": 4, 00:17:02.036 "base_bdevs_list": [ 00:17:02.036 { 00:17:02.036 "name": "spare", 00:17:02.036 "uuid": "e5407834-330c-5961-b4a6-aaee2e28e664", 00:17:02.036 "is_configured": true, 00:17:02.036 "data_offset": 2048, 00:17:02.036 "data_size": 63488 00:17:02.036 }, 00:17:02.036 { 00:17:02.036 "name": "BaseBdev2", 00:17:02.036 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571", 00:17:02.036 "is_configured": true, 00:17:02.036 "data_offset": 2048, 00:17:02.036 "data_size": 63488 00:17:02.036 }, 00:17:02.036 { 00:17:02.036 "name": "BaseBdev3", 00:17:02.036 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530", 00:17:02.036 "is_configured": true, 00:17:02.036 "data_offset": 2048, 00:17:02.036 "data_size": 63488 00:17:02.036 }, 00:17:02.036 { 00:17:02.036 "name": "BaseBdev4", 00:17:02.036 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae", 00:17:02.036 "is_configured": true, 00:17:02.036 "data_offset": 2048, 00:17:02.036 "data_size": 63488 00:17:02.036 } 00:17:02.036 ] 00:17:02.036 }' 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 
00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.036 [2024-11-27 11:55:28.394410] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.036 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.296 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.296 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.296 "name": "raid_bdev1", 00:17:02.296 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6", 00:17:02.296 "strip_size_kb": 64, 00:17:02.296 "state": "online", 00:17:02.296 "raid_level": "raid5f", 00:17:02.296 "superblock": true, 00:17:02.296 "num_base_bdevs": 4, 00:17:02.296 "num_base_bdevs_discovered": 3, 00:17:02.296 "num_base_bdevs_operational": 3, 00:17:02.296 "base_bdevs_list": [ 00:17:02.296 { 00:17:02.296 "name": null, 00:17:02.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.296 "is_configured": false, 00:17:02.296 "data_offset": 0, 00:17:02.296 "data_size": 63488 00:17:02.296 }, 00:17:02.296 { 00:17:02.296 "name": "BaseBdev2", 00:17:02.296 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571", 00:17:02.296 "is_configured": true, 00:17:02.296 "data_offset": 2048, 00:17:02.296 "data_size": 63488 00:17:02.296 }, 00:17:02.296 { 00:17:02.296 "name": "BaseBdev3", 00:17:02.296 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530", 00:17:02.296 "is_configured": true, 00:17:02.296 "data_offset": 2048, 00:17:02.296 "data_size": 63488 00:17:02.296 }, 00:17:02.296 { 00:17:02.296 "name": "BaseBdev4", 00:17:02.296 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae", 00:17:02.296 "is_configured": true, 00:17:02.296 "data_offset": 
2048, 00:17:02.296 "data_size": 63488 00:17:02.296 } 00:17:02.296 ] 00:17:02.296 }' 00:17:02.296 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.296 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.556 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:02.556 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.556 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.556 [2024-11-27 11:55:28.825704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:02.556 [2024-11-27 11:55:28.825929] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:02.556 [2024-11-27 11:55:28.825956] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:02.556 [2024-11-27 11:55:28.826000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:02.556 [2024-11-27 11:55:28.842886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:02.556 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.556 11:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:02.556 [2024-11-27 11:55:28.853102] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:03.494 11:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.494 11:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.494 11:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.494 11:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.494 11:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.494 11:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.494 11:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.494 11:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.494 11:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.754 11:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.754 11:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.754 "name": "raid_bdev1", 00:17:03.754 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6", 00:17:03.754 "strip_size_kb": 64, 00:17:03.754 "state": "online", 00:17:03.754 
"raid_level": "raid5f", 00:17:03.754 "superblock": true, 00:17:03.754 "num_base_bdevs": 4, 00:17:03.754 "num_base_bdevs_discovered": 4, 00:17:03.754 "num_base_bdevs_operational": 4, 00:17:03.754 "process": { 00:17:03.754 "type": "rebuild", 00:17:03.754 "target": "spare", 00:17:03.754 "progress": { 00:17:03.754 "blocks": 19200, 00:17:03.754 "percent": 10 00:17:03.754 } 00:17:03.754 }, 00:17:03.754 "base_bdevs_list": [ 00:17:03.754 { 00:17:03.754 "name": "spare", 00:17:03.754 "uuid": "e5407834-330c-5961-b4a6-aaee2e28e664", 00:17:03.754 "is_configured": true, 00:17:03.754 "data_offset": 2048, 00:17:03.754 "data_size": 63488 00:17:03.754 }, 00:17:03.754 { 00:17:03.754 "name": "BaseBdev2", 00:17:03.754 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571", 00:17:03.754 "is_configured": true, 00:17:03.754 "data_offset": 2048, 00:17:03.754 "data_size": 63488 00:17:03.754 }, 00:17:03.754 { 00:17:03.754 "name": "BaseBdev3", 00:17:03.754 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530", 00:17:03.754 "is_configured": true, 00:17:03.754 "data_offset": 2048, 00:17:03.754 "data_size": 63488 00:17:03.754 }, 00:17:03.754 { 00:17:03.754 "name": "BaseBdev4", 00:17:03.754 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae", 00:17:03.754 "is_configured": true, 00:17:03.754 "data_offset": 2048, 00:17:03.754 "data_size": 63488 00:17:03.754 } 00:17:03.754 ] 00:17:03.754 }' 00:17:03.754 11:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.754 11:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:03.754 11:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.754 11:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:03.754 11:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:03.754 11:55:29 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.754 11:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.754 [2024-11-27 11:55:29.992317] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:03.754 [2024-11-27 11:55:30.061027] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:03.754 [2024-11-27 11:55:30.061164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.754 [2024-11-27 11:55:30.061182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:03.754 [2024-11-27 11:55:30.061193] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:03.754 11:55:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.754 11:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:03.754 11:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.754 11:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.754 11:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.754 11:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.754 11:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:03.754 11:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.755 11:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.755 11:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.755 11:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:17:03.755 11:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.755 11:55:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.755 11:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.755 11:55:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.755 11:55:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.755 11:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.755 "name": "raid_bdev1", 00:17:03.755 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6", 00:17:03.755 "strip_size_kb": 64, 00:17:03.755 "state": "online", 00:17:03.755 "raid_level": "raid5f", 00:17:03.755 "superblock": true, 00:17:03.755 "num_base_bdevs": 4, 00:17:03.755 "num_base_bdevs_discovered": 3, 00:17:03.755 "num_base_bdevs_operational": 3, 00:17:03.755 "base_bdevs_list": [ 00:17:03.755 { 00:17:03.755 "name": null, 00:17:03.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.755 "is_configured": false, 00:17:03.755 "data_offset": 0, 00:17:03.755 "data_size": 63488 00:17:03.755 }, 00:17:03.755 { 00:17:03.755 "name": "BaseBdev2", 00:17:03.755 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571", 00:17:03.755 "is_configured": true, 00:17:03.755 "data_offset": 2048, 00:17:03.755 "data_size": 63488 00:17:03.755 }, 00:17:03.755 { 00:17:03.755 "name": "BaseBdev3", 00:17:03.755 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530", 00:17:03.755 "is_configured": true, 00:17:03.755 "data_offset": 2048, 00:17:03.755 "data_size": 63488 00:17:03.755 }, 00:17:03.755 { 00:17:03.755 "name": "BaseBdev4", 00:17:03.755 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae", 00:17:03.755 "is_configured": true, 00:17:03.755 "data_offset": 2048, 00:17:03.755 "data_size": 63488 00:17:03.755 } 00:17:03.755 ] 00:17:03.755 
}' 00:17:03.755 11:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.755 11:55:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.323 11:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:04.323 11:55:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.323 11:55:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.323 [2024-11-27 11:55:30.538998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:04.323 [2024-11-27 11:55:30.539130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.323 [2024-11-27 11:55:30.539176] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:04.323 [2024-11-27 11:55:30.539214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.323 [2024-11-27 11:55:30.539747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.323 [2024-11-27 11:55:30.539814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:04.323 [2024-11-27 11:55:30.540002] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:04.323 [2024-11-27 11:55:30.540055] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:04.323 [2024-11-27 11:55:30.540103] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:04.323 [2024-11-27 11:55:30.540177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:04.323 [2024-11-27 11:55:30.555056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:17:04.323 spare 00:17:04.323 11:55:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.323 11:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:04.323 [2024-11-27 11:55:30.564636] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:05.260 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:05.260 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.260 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:05.260 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:05.260 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.260 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.260 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.260 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.260 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.260 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.260 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.260 "name": "raid_bdev1", 00:17:05.260 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6", 00:17:05.260 "strip_size_kb": 64, 00:17:05.260 "state": 
"online", 00:17:05.260 "raid_level": "raid5f", 00:17:05.260 "superblock": true, 00:17:05.260 "num_base_bdevs": 4, 00:17:05.260 "num_base_bdevs_discovered": 4, 00:17:05.260 "num_base_bdevs_operational": 4, 00:17:05.260 "process": { 00:17:05.260 "type": "rebuild", 00:17:05.260 "target": "spare", 00:17:05.260 "progress": { 00:17:05.260 "blocks": 19200, 00:17:05.260 "percent": 10 00:17:05.260 } 00:17:05.260 }, 00:17:05.260 "base_bdevs_list": [ 00:17:05.260 { 00:17:05.260 "name": "spare", 00:17:05.260 "uuid": "e5407834-330c-5961-b4a6-aaee2e28e664", 00:17:05.260 "is_configured": true, 00:17:05.260 "data_offset": 2048, 00:17:05.260 "data_size": 63488 00:17:05.260 }, 00:17:05.260 { 00:17:05.260 "name": "BaseBdev2", 00:17:05.260 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571", 00:17:05.260 "is_configured": true, 00:17:05.260 "data_offset": 2048, 00:17:05.260 "data_size": 63488 00:17:05.260 }, 00:17:05.260 { 00:17:05.260 "name": "BaseBdev3", 00:17:05.260 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530", 00:17:05.260 "is_configured": true, 00:17:05.260 "data_offset": 2048, 00:17:05.260 "data_size": 63488 00:17:05.260 }, 00:17:05.260 { 00:17:05.260 "name": "BaseBdev4", 00:17:05.260 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae", 00:17:05.260 "is_configured": true, 00:17:05.260 "data_offset": 2048, 00:17:05.260 "data_size": 63488 00:17:05.260 } 00:17:05.260 ] 00:17:05.260 }' 00:17:05.260 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.519 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:05.519 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.519 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:05.519 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:05.519 11:55:31 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.519 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.519 [2024-11-27 11:55:31.703584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:05.519 [2024-11-27 11:55:31.773248] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:05.519 [2024-11-27 11:55:31.773310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.519 [2024-11-27 11:55:31.773331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:05.519 [2024-11-27 11:55:31.773339] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:05.519 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.519 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:05.519 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.519 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.519 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.519 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.519 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:05.520 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.520 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.520 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.520 11:55:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.520 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.520 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.520 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.520 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.520 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.520 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.520 "name": "raid_bdev1", 00:17:05.520 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6", 00:17:05.520 "strip_size_kb": 64, 00:17:05.520 "state": "online", 00:17:05.520 "raid_level": "raid5f", 00:17:05.520 "superblock": true, 00:17:05.520 "num_base_bdevs": 4, 00:17:05.520 "num_base_bdevs_discovered": 3, 00:17:05.520 "num_base_bdevs_operational": 3, 00:17:05.520 "base_bdevs_list": [ 00:17:05.520 { 00:17:05.520 "name": null, 00:17:05.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.520 "is_configured": false, 00:17:05.520 "data_offset": 0, 00:17:05.520 "data_size": 63488 00:17:05.520 }, 00:17:05.520 { 00:17:05.520 "name": "BaseBdev2", 00:17:05.520 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571", 00:17:05.520 "is_configured": true, 00:17:05.520 "data_offset": 2048, 00:17:05.520 "data_size": 63488 00:17:05.520 }, 00:17:05.520 { 00:17:05.520 "name": "BaseBdev3", 00:17:05.520 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530", 00:17:05.520 "is_configured": true, 00:17:05.520 "data_offset": 2048, 00:17:05.520 "data_size": 63488 00:17:05.520 }, 00:17:05.520 { 00:17:05.520 "name": "BaseBdev4", 00:17:05.520 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae", 00:17:05.520 "is_configured": true, 00:17:05.520 "data_offset": 2048, 00:17:05.520 
"data_size": 63488 00:17:05.520 } 00:17:05.520 ] 00:17:05.520 }' 00:17:05.520 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.520 11:55:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.087 11:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:06.087 11:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.087 11:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:06.087 11:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:06.087 11:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.087 11:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.087 11:55:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.087 11:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.087 11:55:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.087 11:55:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.087 11:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.087 "name": "raid_bdev1", 00:17:06.087 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6", 00:17:06.087 "strip_size_kb": 64, 00:17:06.087 "state": "online", 00:17:06.087 "raid_level": "raid5f", 00:17:06.087 "superblock": true, 00:17:06.087 "num_base_bdevs": 4, 00:17:06.087 "num_base_bdevs_discovered": 3, 00:17:06.087 "num_base_bdevs_operational": 3, 00:17:06.087 "base_bdevs_list": [ 00:17:06.087 { 00:17:06.087 "name": null, 00:17:06.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.088 
"is_configured": false, 00:17:06.088 "data_offset": 0, 00:17:06.088 "data_size": 63488 00:17:06.088 }, 00:17:06.088 { 00:17:06.088 "name": "BaseBdev2", 00:17:06.088 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571", 00:17:06.088 "is_configured": true, 00:17:06.088 "data_offset": 2048, 00:17:06.088 "data_size": 63488 00:17:06.088 }, 00:17:06.088 { 00:17:06.088 "name": "BaseBdev3", 00:17:06.088 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530", 00:17:06.088 "is_configured": true, 00:17:06.088 "data_offset": 2048, 00:17:06.088 "data_size": 63488 00:17:06.088 }, 00:17:06.088 { 00:17:06.088 "name": "BaseBdev4", 00:17:06.088 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae", 00:17:06.088 "is_configured": true, 00:17:06.088 "data_offset": 2048, 00:17:06.088 "data_size": 63488 00:17:06.088 } 00:17:06.088 ] 00:17:06.088 }' 00:17:06.088 11:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.088 11:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:06.088 11:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.088 11:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:06.088 11:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:06.088 11:55:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.088 11:55:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.088 11:55:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.088 11:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:06.088 11:55:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.088 11:55:32 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.088 [2024-11-27 11:55:32.427735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:06.088 [2024-11-27 11:55:32.427796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.088 [2024-11-27 11:55:32.427821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:06.088 [2024-11-27 11:55:32.427831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.088 [2024-11-27 11:55:32.428358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.088 [2024-11-27 11:55:32.428440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:06.088 [2024-11-27 11:55:32.428558] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:06.088 [2024-11-27 11:55:32.428575] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:06.088 [2024-11-27 11:55:32.428590] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:06.088 [2024-11-27 11:55:32.428603] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:06.088 BaseBdev1 00:17:06.088 11:55:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.088 11:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:07.471 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:07.471 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.471 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:07.471 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.471 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.471 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:07.471 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.471 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.471 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.471 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.471 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.471 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.471 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.471 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.471 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.471 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.471 "name": "raid_bdev1", 00:17:07.471 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6", 00:17:07.471 "strip_size_kb": 64, 00:17:07.471 "state": "online", 00:17:07.471 "raid_level": "raid5f", 00:17:07.471 "superblock": true, 00:17:07.471 "num_base_bdevs": 4, 00:17:07.471 "num_base_bdevs_discovered": 3, 00:17:07.471 "num_base_bdevs_operational": 3, 00:17:07.471 "base_bdevs_list": [ 00:17:07.471 { 00:17:07.471 "name": null, 00:17:07.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.471 "is_configured": false, 00:17:07.471 
"data_offset": 0, 00:17:07.471 "data_size": 63488 00:17:07.471 }, 00:17:07.471 { 00:17:07.471 "name": "BaseBdev2", 00:17:07.471 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571", 00:17:07.471 "is_configured": true, 00:17:07.471 "data_offset": 2048, 00:17:07.471 "data_size": 63488 00:17:07.471 }, 00:17:07.471 { 00:17:07.471 "name": "BaseBdev3", 00:17:07.471 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530", 00:17:07.471 "is_configured": true, 00:17:07.471 "data_offset": 2048, 00:17:07.471 "data_size": 63488 00:17:07.471 }, 00:17:07.471 { 00:17:07.471 "name": "BaseBdev4", 00:17:07.471 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae", 00:17:07.471 "is_configured": true, 00:17:07.471 "data_offset": 2048, 00:17:07.471 "data_size": 63488 00:17:07.471 } 00:17:07.471 ] 00:17:07.471 }' 00:17:07.471 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.471 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.731 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:07.731 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.731 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:07.731 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:07.731 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.731 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.731 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.731 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.731 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:07.731 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.731 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.731 "name": "raid_bdev1", 00:17:07.731 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6", 00:17:07.731 "strip_size_kb": 64, 00:17:07.731 "state": "online", 00:17:07.731 "raid_level": "raid5f", 00:17:07.731 "superblock": true, 00:17:07.731 "num_base_bdevs": 4, 00:17:07.731 "num_base_bdevs_discovered": 3, 00:17:07.731 "num_base_bdevs_operational": 3, 00:17:07.731 "base_bdevs_list": [ 00:17:07.731 { 00:17:07.731 "name": null, 00:17:07.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.731 "is_configured": false, 00:17:07.731 "data_offset": 0, 00:17:07.731 "data_size": 63488 00:17:07.731 }, 00:17:07.731 { 00:17:07.731 "name": "BaseBdev2", 00:17:07.731 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571", 00:17:07.731 "is_configured": true, 00:17:07.731 "data_offset": 2048, 00:17:07.731 "data_size": 63488 00:17:07.731 }, 00:17:07.731 { 00:17:07.731 "name": "BaseBdev3", 00:17:07.731 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530", 00:17:07.731 "is_configured": true, 00:17:07.731 "data_offset": 2048, 00:17:07.731 "data_size": 63488 00:17:07.731 }, 00:17:07.731 { 00:17:07.731 "name": "BaseBdev4", 00:17:07.731 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae", 00:17:07.731 "is_configured": true, 00:17:07.731 "data_offset": 2048, 00:17:07.731 "data_size": 63488 00:17:07.731 } 00:17:07.731 ] 00:17:07.731 }' 00:17:07.731 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.731 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:07.731 11:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.731 11:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:07.731 
11:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:07.731 11:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:07.731 11:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:07.731 11:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:07.731 11:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.731 11:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:07.731 11:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.731 11:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:07.731 11:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.731 11:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.731 [2024-11-27 11:55:34.037078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:07.731 [2024-11-27 11:55:34.037272] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:07.731 [2024-11-27 11:55:34.037289] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:07.731 request: 00:17:07.731 { 00:17:07.731 "base_bdev": "BaseBdev1", 00:17:07.731 "raid_bdev": "raid_bdev1", 00:17:07.731 "method": "bdev_raid_add_base_bdev", 00:17:07.731 "req_id": 1 00:17:07.731 } 00:17:07.731 Got JSON-RPC error response 00:17:07.731 response: 00:17:07.731 { 00:17:07.731 "code": -22, 00:17:07.731 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:17:07.731 } 00:17:07.731 11:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:07.731 11:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:07.731 11:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:07.731 11:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:07.731 11:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:07.731 11:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:08.669 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:08.669 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.669 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.669 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.669 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.929 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:08.929 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.929 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.929 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.929 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.929 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.929 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.929 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.929 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.929 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.929 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.929 "name": "raid_bdev1", 00:17:08.929 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6", 00:17:08.929 "strip_size_kb": 64, 00:17:08.929 "state": "online", 00:17:08.929 "raid_level": "raid5f", 00:17:08.929 "superblock": true, 00:17:08.929 "num_base_bdevs": 4, 00:17:08.929 "num_base_bdevs_discovered": 3, 00:17:08.929 "num_base_bdevs_operational": 3, 00:17:08.929 "base_bdevs_list": [ 00:17:08.929 { 00:17:08.929 "name": null, 00:17:08.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.929 "is_configured": false, 00:17:08.929 "data_offset": 0, 00:17:08.929 "data_size": 63488 00:17:08.929 }, 00:17:08.930 { 00:17:08.930 "name": "BaseBdev2", 00:17:08.930 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571", 00:17:08.930 "is_configured": true, 00:17:08.930 "data_offset": 2048, 00:17:08.930 "data_size": 63488 00:17:08.930 }, 00:17:08.930 { 00:17:08.930 "name": "BaseBdev3", 00:17:08.930 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530", 00:17:08.930 "is_configured": true, 00:17:08.930 "data_offset": 2048, 00:17:08.930 "data_size": 63488 00:17:08.930 }, 00:17:08.930 { 00:17:08.930 "name": "BaseBdev4", 00:17:08.930 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae", 00:17:08.930 "is_configured": true, 00:17:08.930 "data_offset": 2048, 00:17:08.930 "data_size": 63488 00:17:08.930 } 00:17:08.930 ] 00:17:08.930 }' 00:17:08.930 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.930 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:17:09.189 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:09.189 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.189 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:09.189 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:09.189 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.189 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.189 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.189 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.189 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.189 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.189 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.189 "name": "raid_bdev1", 00:17:09.189 "uuid": "c53fa928-1bb3-410b-8e45-1d42877ad7b6", 00:17:09.189 "strip_size_kb": 64, 00:17:09.189 "state": "online", 00:17:09.189 "raid_level": "raid5f", 00:17:09.189 "superblock": true, 00:17:09.189 "num_base_bdevs": 4, 00:17:09.189 "num_base_bdevs_discovered": 3, 00:17:09.189 "num_base_bdevs_operational": 3, 00:17:09.189 "base_bdevs_list": [ 00:17:09.189 { 00:17:09.189 "name": null, 00:17:09.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.189 "is_configured": false, 00:17:09.189 "data_offset": 0, 00:17:09.189 "data_size": 63488 00:17:09.189 }, 00:17:09.189 { 00:17:09.189 "name": "BaseBdev2", 00:17:09.189 "uuid": "c6ff9e63-fe7e-5ebc-8c40-10af58f4b571", 00:17:09.189 "is_configured": true, 
00:17:09.189 "data_offset": 2048, 00:17:09.189 "data_size": 63488 00:17:09.189 }, 00:17:09.189 { 00:17:09.189 "name": "BaseBdev3", 00:17:09.190 "uuid": "046826e7-762f-5b30-95b7-a2f9f8c3b530", 00:17:09.190 "is_configured": true, 00:17:09.190 "data_offset": 2048, 00:17:09.190 "data_size": 63488 00:17:09.190 }, 00:17:09.190 { 00:17:09.190 "name": "BaseBdev4", 00:17:09.190 "uuid": "95d4523f-97e1-5f2e-9bac-2ce57473f6ae", 00:17:09.190 "is_configured": true, 00:17:09.190 "data_offset": 2048, 00:17:09.190 "data_size": 63488 00:17:09.190 } 00:17:09.190 ] 00:17:09.190 }' 00:17:09.190 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.190 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:09.190 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.449 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:09.449 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85194 00:17:09.449 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85194 ']' 00:17:09.449 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85194 00:17:09.449 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:09.449 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:09.449 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85194 00:17:09.449 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:09.449 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:09.449 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 
-- # echo 'killing process with pid 85194' 00:17:09.449 killing process with pid 85194 00:17:09.449 Received shutdown signal, test time was about 60.000000 seconds 00:17:09.449 00:17:09.449 Latency(us) 00:17:09.449 [2024-11-27T11:55:35.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.449 [2024-11-27T11:55:35.834Z] =================================================================================================================== 00:17:09.449 [2024-11-27T11:55:35.834Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:09.449 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85194 00:17:09.449 [2024-11-27 11:55:35.630526] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:09.449 [2024-11-27 11:55:35.630663] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.449 11:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85194 00:17:09.449 [2024-11-27 11:55:35.630750] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:09.449 [2024-11-27 11:55:35.630763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:10.018 [2024-11-27 11:55:36.113847] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:10.955 11:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:10.955 00:17:10.955 real 0m26.904s 00:17:10.955 user 0m33.711s 00:17:10.955 sys 0m2.968s 00:17:10.955 ************************************ 00:17:10.955 END TEST raid5f_rebuild_test_sb 00:17:10.955 ************************************ 00:17:10.955 11:55:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:10.955 11:55:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.956 11:55:37 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:17:10.956 11:55:37 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:10.956 11:55:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:10.956 11:55:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:10.956 11:55:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:10.956 ************************************ 00:17:10.956 START TEST raid_state_function_test_sb_4k 00:17:10.956 ************************************ 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:10.956 11:55:37 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85999 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85999' 00:17:10.956 Process raid pid: 85999 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85999 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 85999 ']' 00:17:10.956 11:55:37 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:10.956 11:55:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.215 [2024-11-27 11:55:37.396675] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:17:11.215 [2024-11-27 11:55:37.396876] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.215 [2024-11-27 11:55:37.574247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.474 [2024-11-27 11:55:37.688344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.733 [2024-11-27 11:55:37.891176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:11.733 [2024-11-27 11:55:37.891314] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:11.992 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.992 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:11.992 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:17:11.992 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.992 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.992 [2024-11-27 11:55:38.230151] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:11.992 [2024-11-27 11:55:38.230207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:11.992 [2024-11-27 11:55:38.230217] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:11.992 [2024-11-27 11:55:38.230227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:11.992 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.992 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:11.992 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:11.992 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:11.992 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:11.992 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:11.992 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:11.992 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.992 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.992 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.992 
11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.992 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.992 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.992 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.992 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.992 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.992 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.992 "name": "Existed_Raid", 00:17:11.992 "uuid": "254fa153-9ae6-4e0f-9972-b88dc8448da0", 00:17:11.992 "strip_size_kb": 0, 00:17:11.992 "state": "configuring", 00:17:11.992 "raid_level": "raid1", 00:17:11.992 "superblock": true, 00:17:11.992 "num_base_bdevs": 2, 00:17:11.992 "num_base_bdevs_discovered": 0, 00:17:11.992 "num_base_bdevs_operational": 2, 00:17:11.992 "base_bdevs_list": [ 00:17:11.992 { 00:17:11.992 "name": "BaseBdev1", 00:17:11.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.992 "is_configured": false, 00:17:11.992 "data_offset": 0, 00:17:11.992 "data_size": 0 00:17:11.992 }, 00:17:11.992 { 00:17:11.992 "name": "BaseBdev2", 00:17:11.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.992 "is_configured": false, 00:17:11.992 "data_offset": 0, 00:17:11.992 "data_size": 0 00:17:11.992 } 00:17:11.992 ] 00:17:11.992 }' 00:17:11.992 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.992 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.565 [2024-11-27 11:55:38.705291] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:12.565 [2024-11-27 11:55:38.705387] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.565 [2024-11-27 11:55:38.713274] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:12.565 [2024-11-27 11:55:38.713372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:12.565 [2024-11-27 11:55:38.713423] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:12.565 [2024-11-27 11:55:38.713458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.565 11:55:38 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.565 [2024-11-27 11:55:38.756553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:12.565 BaseBdev1 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.565 [ 00:17:12.565 { 00:17:12.565 "name": "BaseBdev1", 00:17:12.565 "aliases": [ 00:17:12.565 
"77ba913a-fdd7-492f-a1fc-29697ee86825" 00:17:12.565 ], 00:17:12.565 "product_name": "Malloc disk", 00:17:12.565 "block_size": 4096, 00:17:12.565 "num_blocks": 8192, 00:17:12.565 "uuid": "77ba913a-fdd7-492f-a1fc-29697ee86825", 00:17:12.565 "assigned_rate_limits": { 00:17:12.565 "rw_ios_per_sec": 0, 00:17:12.565 "rw_mbytes_per_sec": 0, 00:17:12.565 "r_mbytes_per_sec": 0, 00:17:12.565 "w_mbytes_per_sec": 0 00:17:12.565 }, 00:17:12.565 "claimed": true, 00:17:12.565 "claim_type": "exclusive_write", 00:17:12.565 "zoned": false, 00:17:12.565 "supported_io_types": { 00:17:12.565 "read": true, 00:17:12.565 "write": true, 00:17:12.565 "unmap": true, 00:17:12.565 "flush": true, 00:17:12.565 "reset": true, 00:17:12.565 "nvme_admin": false, 00:17:12.565 "nvme_io": false, 00:17:12.565 "nvme_io_md": false, 00:17:12.565 "write_zeroes": true, 00:17:12.565 "zcopy": true, 00:17:12.565 "get_zone_info": false, 00:17:12.565 "zone_management": false, 00:17:12.565 "zone_append": false, 00:17:12.565 "compare": false, 00:17:12.565 "compare_and_write": false, 00:17:12.565 "abort": true, 00:17:12.565 "seek_hole": false, 00:17:12.565 "seek_data": false, 00:17:12.565 "copy": true, 00:17:12.565 "nvme_iov_md": false 00:17:12.565 }, 00:17:12.565 "memory_domains": [ 00:17:12.565 { 00:17:12.565 "dma_device_id": "system", 00:17:12.565 "dma_device_type": 1 00:17:12.565 }, 00:17:12.565 { 00:17:12.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.565 "dma_device_type": 2 00:17:12.565 } 00:17:12.565 ], 00:17:12.565 "driver_specific": {} 00:17:12.565 } 00:17:12.565 ] 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.565 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.566 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.566 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.566 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.566 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.566 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.566 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.566 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.566 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.566 "name": "Existed_Raid", 00:17:12.566 "uuid": "9e1c2cba-4ee8-42b5-b07c-7b6e88f48bdb", 00:17:12.566 "strip_size_kb": 0, 00:17:12.566 "state": "configuring", 00:17:12.566 "raid_level": "raid1", 00:17:12.566 "superblock": true, 00:17:12.566 "num_base_bdevs": 2, 00:17:12.566 
"num_base_bdevs_discovered": 1, 00:17:12.566 "num_base_bdevs_operational": 2, 00:17:12.566 "base_bdevs_list": [ 00:17:12.566 { 00:17:12.566 "name": "BaseBdev1", 00:17:12.566 "uuid": "77ba913a-fdd7-492f-a1fc-29697ee86825", 00:17:12.566 "is_configured": true, 00:17:12.566 "data_offset": 256, 00:17:12.566 "data_size": 7936 00:17:12.566 }, 00:17:12.566 { 00:17:12.566 "name": "BaseBdev2", 00:17:12.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.566 "is_configured": false, 00:17:12.566 "data_offset": 0, 00:17:12.566 "data_size": 0 00:17:12.566 } 00:17:12.566 ] 00:17:12.566 }' 00:17:12.566 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.566 11:55:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.135 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:13.135 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.135 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.135 [2024-11-27 11:55:39.259788] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:13.135 [2024-11-27 11:55:39.259942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:13.135 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.135 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:13.135 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.135 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.135 [2024-11-27 11:55:39.271792] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:13.135 [2024-11-27 11:55:39.273666] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:13.135 [2024-11-27 11:55:39.273708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:13.135 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.135 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:13.135 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:13.135 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:13.135 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:13.135 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:13.135 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.135 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.135 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:13.135 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.135 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.135 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.136 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.136 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:13.136 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.136 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.136 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.136 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.136 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.136 "name": "Existed_Raid", 00:17:13.136 "uuid": "43e58530-f3f1-48fe-97b0-9630e124b180", 00:17:13.136 "strip_size_kb": 0, 00:17:13.136 "state": "configuring", 00:17:13.136 "raid_level": "raid1", 00:17:13.136 "superblock": true, 00:17:13.136 "num_base_bdevs": 2, 00:17:13.136 "num_base_bdevs_discovered": 1, 00:17:13.136 "num_base_bdevs_operational": 2, 00:17:13.136 "base_bdevs_list": [ 00:17:13.136 { 00:17:13.136 "name": "BaseBdev1", 00:17:13.136 "uuid": "77ba913a-fdd7-492f-a1fc-29697ee86825", 00:17:13.136 "is_configured": true, 00:17:13.136 "data_offset": 256, 00:17:13.136 "data_size": 7936 00:17:13.136 }, 00:17:13.136 { 00:17:13.136 "name": "BaseBdev2", 00:17:13.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.136 "is_configured": false, 00:17:13.136 "data_offset": 0, 00:17:13.136 "data_size": 0 00:17:13.136 } 00:17:13.136 ] 00:17:13.136 }' 00:17:13.136 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.136 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.395 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:13.395 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.395 11:55:39 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.395 [2024-11-27 11:55:39.760274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:13.395 [2024-11-27 11:55:39.760627] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:13.395 [2024-11-27 11:55:39.760682] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:13.395 [2024-11-27 11:55:39.761004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:13.395 [2024-11-27 11:55:39.761239] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:13.395 [2024-11-27 11:55:39.761288] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:13.395 BaseBdev2 00:17:13.395 [2024-11-27 11:55:39.761490] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.395 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.395 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:13.396 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:13.396 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:13.396 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:13.396 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:13.396 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:13.396 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:13.396 11:55:39 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.396 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.396 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.396 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:13.396 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.396 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.655 [ 00:17:13.655 { 00:17:13.655 "name": "BaseBdev2", 00:17:13.655 "aliases": [ 00:17:13.655 "266632db-2fc9-45a0-859e-0219e7601758" 00:17:13.655 ], 00:17:13.655 "product_name": "Malloc disk", 00:17:13.655 "block_size": 4096, 00:17:13.655 "num_blocks": 8192, 00:17:13.655 "uuid": "266632db-2fc9-45a0-859e-0219e7601758", 00:17:13.655 "assigned_rate_limits": { 00:17:13.655 "rw_ios_per_sec": 0, 00:17:13.655 "rw_mbytes_per_sec": 0, 00:17:13.655 "r_mbytes_per_sec": 0, 00:17:13.655 "w_mbytes_per_sec": 0 00:17:13.655 }, 00:17:13.656 "claimed": true, 00:17:13.656 "claim_type": "exclusive_write", 00:17:13.656 "zoned": false, 00:17:13.656 "supported_io_types": { 00:17:13.656 "read": true, 00:17:13.656 "write": true, 00:17:13.656 "unmap": true, 00:17:13.656 "flush": true, 00:17:13.656 "reset": true, 00:17:13.656 "nvme_admin": false, 00:17:13.656 "nvme_io": false, 00:17:13.656 "nvme_io_md": false, 00:17:13.656 "write_zeroes": true, 00:17:13.656 "zcopy": true, 00:17:13.656 "get_zone_info": false, 00:17:13.656 "zone_management": false, 00:17:13.656 "zone_append": false, 00:17:13.656 "compare": false, 00:17:13.656 "compare_and_write": false, 00:17:13.656 "abort": true, 00:17:13.656 "seek_hole": false, 00:17:13.656 "seek_data": false, 00:17:13.656 "copy": true, 00:17:13.656 "nvme_iov_md": false 
00:17:13.656 }, 00:17:13.656 "memory_domains": [ 00:17:13.656 { 00:17:13.656 "dma_device_id": "system", 00:17:13.656 "dma_device_type": 1 00:17:13.656 }, 00:17:13.656 { 00:17:13.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.656 "dma_device_type": 2 00:17:13.656 } 00:17:13.656 ], 00:17:13.656 "driver_specific": {} 00:17:13.656 } 00:17:13.656 ] 00:17:13.656 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.656 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:13.656 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:13.656 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:13.656 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:13.656 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:13.656 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.656 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.656 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.656 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:13.656 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.656 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.656 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.656 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:13.656 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.656 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.656 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.656 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.656 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.656 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.656 "name": "Existed_Raid", 00:17:13.656 "uuid": "43e58530-f3f1-48fe-97b0-9630e124b180", 00:17:13.656 "strip_size_kb": 0, 00:17:13.656 "state": "online", 00:17:13.656 "raid_level": "raid1", 00:17:13.656 "superblock": true, 00:17:13.656 "num_base_bdevs": 2, 00:17:13.656 "num_base_bdevs_discovered": 2, 00:17:13.656 "num_base_bdevs_operational": 2, 00:17:13.656 "base_bdevs_list": [ 00:17:13.656 { 00:17:13.656 "name": "BaseBdev1", 00:17:13.656 "uuid": "77ba913a-fdd7-492f-a1fc-29697ee86825", 00:17:13.656 "is_configured": true, 00:17:13.656 "data_offset": 256, 00:17:13.656 "data_size": 7936 00:17:13.656 }, 00:17:13.656 { 00:17:13.656 "name": "BaseBdev2", 00:17:13.656 "uuid": "266632db-2fc9-45a0-859e-0219e7601758", 00:17:13.656 "is_configured": true, 00:17:13.656 "data_offset": 256, 00:17:13.656 "data_size": 7936 00:17:13.656 } 00:17:13.656 ] 00:17:13.656 }' 00:17:13.656 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.656 11:55:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.916 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:13.916 11:55:40 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:13.916 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:13.916 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:13.916 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:13.916 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:13.916 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:13.916 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:13.916 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.916 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.916 [2024-11-27 11:55:40.255792] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:13.916 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.916 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:13.916 "name": "Existed_Raid", 00:17:13.916 "aliases": [ 00:17:13.916 "43e58530-f3f1-48fe-97b0-9630e124b180" 00:17:13.916 ], 00:17:13.916 "product_name": "Raid Volume", 00:17:13.916 "block_size": 4096, 00:17:13.916 "num_blocks": 7936, 00:17:13.916 "uuid": "43e58530-f3f1-48fe-97b0-9630e124b180", 00:17:13.916 "assigned_rate_limits": { 00:17:13.916 "rw_ios_per_sec": 0, 00:17:13.916 "rw_mbytes_per_sec": 0, 00:17:13.916 "r_mbytes_per_sec": 0, 00:17:13.916 "w_mbytes_per_sec": 0 00:17:13.916 }, 00:17:13.916 "claimed": false, 00:17:13.916 "zoned": false, 00:17:13.916 "supported_io_types": { 00:17:13.916 "read": true, 
00:17:13.916 "write": true, 00:17:13.916 "unmap": false, 00:17:13.916 "flush": false, 00:17:13.916 "reset": true, 00:17:13.916 "nvme_admin": false, 00:17:13.916 "nvme_io": false, 00:17:13.916 "nvme_io_md": false, 00:17:13.916 "write_zeroes": true, 00:17:13.916 "zcopy": false, 00:17:13.916 "get_zone_info": false, 00:17:13.916 "zone_management": false, 00:17:13.916 "zone_append": false, 00:17:13.916 "compare": false, 00:17:13.916 "compare_and_write": false, 00:17:13.916 "abort": false, 00:17:13.916 "seek_hole": false, 00:17:13.916 "seek_data": false, 00:17:13.916 "copy": false, 00:17:13.916 "nvme_iov_md": false 00:17:13.916 }, 00:17:13.916 "memory_domains": [ 00:17:13.916 { 00:17:13.916 "dma_device_id": "system", 00:17:13.916 "dma_device_type": 1 00:17:13.916 }, 00:17:13.916 { 00:17:13.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.916 "dma_device_type": 2 00:17:13.916 }, 00:17:13.916 { 00:17:13.916 "dma_device_id": "system", 00:17:13.916 "dma_device_type": 1 00:17:13.916 }, 00:17:13.916 { 00:17:13.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.916 "dma_device_type": 2 00:17:13.916 } 00:17:13.916 ], 00:17:13.916 "driver_specific": { 00:17:13.916 "raid": { 00:17:13.916 "uuid": "43e58530-f3f1-48fe-97b0-9630e124b180", 00:17:13.916 "strip_size_kb": 0, 00:17:13.916 "state": "online", 00:17:13.916 "raid_level": "raid1", 00:17:13.916 "superblock": true, 00:17:13.916 "num_base_bdevs": 2, 00:17:13.916 "num_base_bdevs_discovered": 2, 00:17:13.916 "num_base_bdevs_operational": 2, 00:17:13.916 "base_bdevs_list": [ 00:17:13.916 { 00:17:13.916 "name": "BaseBdev1", 00:17:13.916 "uuid": "77ba913a-fdd7-492f-a1fc-29697ee86825", 00:17:13.916 "is_configured": true, 00:17:13.916 "data_offset": 256, 00:17:13.916 "data_size": 7936 00:17:13.916 }, 00:17:13.916 { 00:17:13.916 "name": "BaseBdev2", 00:17:13.916 "uuid": "266632db-2fc9-45a0-859e-0219e7601758", 00:17:13.916 "is_configured": true, 00:17:13.916 "data_offset": 256, 00:17:13.916 "data_size": 7936 00:17:13.916 } 
00:17:13.916 ] 00:17:13.916 } 00:17:13.916 } 00:17:13.916 }' 00:17:13.916 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:14.177 BaseBdev2' 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.177 11:55:40 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.177 [2024-11-27 11:55:40.447248] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:14.177 11:55:40 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.177 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.436 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.436 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.436 "name": "Existed_Raid", 00:17:14.436 "uuid": "43e58530-f3f1-48fe-97b0-9630e124b180", 00:17:14.436 "strip_size_kb": 0, 00:17:14.436 "state": "online", 00:17:14.437 "raid_level": "raid1", 00:17:14.437 "superblock": true, 00:17:14.437 
"num_base_bdevs": 2, 00:17:14.437 "num_base_bdevs_discovered": 1, 00:17:14.437 "num_base_bdevs_operational": 1, 00:17:14.437 "base_bdevs_list": [ 00:17:14.437 { 00:17:14.437 "name": null, 00:17:14.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.437 "is_configured": false, 00:17:14.437 "data_offset": 0, 00:17:14.437 "data_size": 7936 00:17:14.437 }, 00:17:14.437 { 00:17:14.437 "name": "BaseBdev2", 00:17:14.437 "uuid": "266632db-2fc9-45a0-859e-0219e7601758", 00:17:14.437 "is_configured": true, 00:17:14.437 "data_offset": 256, 00:17:14.437 "data_size": 7936 00:17:14.437 } 00:17:14.437 ] 00:17:14.437 }' 00:17:14.437 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.437 11:55:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.696 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:14.696 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:14.696 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.696 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.696 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.696 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:14.696 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.696 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:14.697 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:14.697 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:17:14.697 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.697 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.697 [2024-11-27 11:55:41.065684] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:14.697 [2024-11-27 11:55:41.065792] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:14.957 [2024-11-27 11:55:41.165183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:14.957 [2024-11-27 11:55:41.165236] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:14.957 [2024-11-27 11:55:41.165249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:14.957 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.957 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:14.957 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:14.957 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.957 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.957 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:14.957 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.957 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.957 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:14.957 11:55:41 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:14.957 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:14.957 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85999 00:17:14.957 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 85999 ']' 00:17:14.957 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 85999 00:17:14.957 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:14.957 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:14.957 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85999 00:17:14.957 killing process with pid 85999 00:17:14.957 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:14.957 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:14.957 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85999' 00:17:14.957 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 85999 00:17:14.957 [2024-11-27 11:55:41.247272] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:14.957 11:55:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 85999 00:17:14.957 [2024-11-27 11:55:41.263845] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:16.338 ************************************ 00:17:16.338 END TEST raid_state_function_test_sb_4k 00:17:16.338 ************************************ 00:17:16.338 11:55:42 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@328 -- # return 0 00:17:16.338 00:17:16.338 real 0m5.092s 00:17:16.338 user 0m7.336s 00:17:16.338 sys 0m0.879s 00:17:16.338 11:55:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:16.338 11:55:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.338 11:55:42 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:16.338 11:55:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:16.338 11:55:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:16.338 11:55:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:16.338 ************************************ 00:17:16.338 START TEST raid_superblock_test_4k 00:17:16.338 ************************************ 00:17:16.338 11:55:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:16.338 11:55:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:16.338 11:55:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:16.338 11:55:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:16.338 11:55:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:16.338 11:55:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:16.338 11:55:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:16.338 11:55:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:16.338 11:55:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:16.338 11:55:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:16.338 
11:55:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:16.338 11:55:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:16.338 11:55:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:16.338 11:55:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:16.338 11:55:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:16.338 11:55:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:16.338 11:55:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86251 00:17:16.338 11:55:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:16.338 11:55:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86251 00:17:16.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.338 11:55:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86251 ']' 00:17:16.338 11:55:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.338 11:55:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:16.338 11:55:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.338 11:55:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:16.338 11:55:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.338 [2024-11-27 11:55:42.548552] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:17:16.338 [2024-11-27 11:55:42.548758] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86251 ] 00:17:16.598 [2024-11-27 11:55:42.723480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.598 [2024-11-27 11:55:42.836177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.858 [2024-11-27 11:55:43.028406] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:16.858 [2024-11-27 11:55:43.028556] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.118 malloc1 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.118 [2024-11-27 11:55:43.430939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:17.118 [2024-11-27 11:55:43.430997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.118 [2024-11-27 11:55:43.431020] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:17.118 [2024-11-27 11:55:43.431029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.118 [2024-11-27 11:55:43.433142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.118 [2024-11-27 11:55:43.433182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:17.118 pt1 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.118 malloc2 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.118 [2024-11-27 11:55:43.484979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:17.118 [2024-11-27 11:55:43.485087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.118 [2024-11-27 11:55:43.485132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:17.118 [2024-11-27 11:55:43.485166] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.118 [2024-11-27 11:55:43.487304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.118 [2024-11-27 
11:55:43.487372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:17.118 pt2 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.118 11:55:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.118 [2024-11-27 11:55:43.497022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:17.118 [2024-11-27 11:55:43.498825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:17.118 [2024-11-27 11:55:43.499054] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:17.118 [2024-11-27 11:55:43.499105] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:17.118 [2024-11-27 11:55:43.499392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:17.118 [2024-11-27 11:55:43.499605] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:17.118 [2024-11-27 11:55:43.499655] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:17.118 [2024-11-27 11:55:43.499877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.378 11:55:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.379 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:17.379 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.379 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.379 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.379 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.379 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:17.379 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.379 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.379 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.379 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.379 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.379 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.379 11:55:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.379 11:55:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.379 11:55:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.379 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.379 "name": "raid_bdev1", 00:17:17.379 "uuid": "c8a5af2c-730d-495a-a2d8-4f186d5fcf64", 00:17:17.379 "strip_size_kb": 0, 00:17:17.379 "state": "online", 00:17:17.379 "raid_level": "raid1", 00:17:17.379 "superblock": true, 00:17:17.379 "num_base_bdevs": 2, 00:17:17.379 
"num_base_bdevs_discovered": 2, 00:17:17.379 "num_base_bdevs_operational": 2, 00:17:17.379 "base_bdevs_list": [ 00:17:17.379 { 00:17:17.379 "name": "pt1", 00:17:17.379 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:17.379 "is_configured": true, 00:17:17.379 "data_offset": 256, 00:17:17.379 "data_size": 7936 00:17:17.379 }, 00:17:17.379 { 00:17:17.379 "name": "pt2", 00:17:17.379 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:17.379 "is_configured": true, 00:17:17.379 "data_offset": 256, 00:17:17.379 "data_size": 7936 00:17:17.379 } 00:17:17.379 ] 00:17:17.379 }' 00:17:17.379 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.379 11:55:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.639 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:17.639 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:17.639 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:17.639 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:17.639 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:17.639 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:17.639 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:17.639 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:17.639 11:55:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.639 11:55:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.639 [2024-11-27 11:55:43.920570] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:17.639 11:55:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.639 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:17.639 "name": "raid_bdev1", 00:17:17.639 "aliases": [ 00:17:17.639 "c8a5af2c-730d-495a-a2d8-4f186d5fcf64" 00:17:17.639 ], 00:17:17.639 "product_name": "Raid Volume", 00:17:17.639 "block_size": 4096, 00:17:17.639 "num_blocks": 7936, 00:17:17.639 "uuid": "c8a5af2c-730d-495a-a2d8-4f186d5fcf64", 00:17:17.639 "assigned_rate_limits": { 00:17:17.639 "rw_ios_per_sec": 0, 00:17:17.639 "rw_mbytes_per_sec": 0, 00:17:17.639 "r_mbytes_per_sec": 0, 00:17:17.639 "w_mbytes_per_sec": 0 00:17:17.639 }, 00:17:17.639 "claimed": false, 00:17:17.639 "zoned": false, 00:17:17.639 "supported_io_types": { 00:17:17.639 "read": true, 00:17:17.639 "write": true, 00:17:17.639 "unmap": false, 00:17:17.639 "flush": false, 00:17:17.639 "reset": true, 00:17:17.639 "nvme_admin": false, 00:17:17.639 "nvme_io": false, 00:17:17.639 "nvme_io_md": false, 00:17:17.639 "write_zeroes": true, 00:17:17.639 "zcopy": false, 00:17:17.639 "get_zone_info": false, 00:17:17.639 "zone_management": false, 00:17:17.639 "zone_append": false, 00:17:17.639 "compare": false, 00:17:17.639 "compare_and_write": false, 00:17:17.639 "abort": false, 00:17:17.639 "seek_hole": false, 00:17:17.639 "seek_data": false, 00:17:17.639 "copy": false, 00:17:17.639 "nvme_iov_md": false 00:17:17.639 }, 00:17:17.639 "memory_domains": [ 00:17:17.639 { 00:17:17.639 "dma_device_id": "system", 00:17:17.639 "dma_device_type": 1 00:17:17.639 }, 00:17:17.639 { 00:17:17.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.639 "dma_device_type": 2 00:17:17.639 }, 00:17:17.639 { 00:17:17.639 "dma_device_id": "system", 00:17:17.639 "dma_device_type": 1 00:17:17.639 }, 00:17:17.639 { 00:17:17.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.639 "dma_device_type": 2 00:17:17.639 } 00:17:17.639 ], 
00:17:17.639 "driver_specific": { 00:17:17.639 "raid": { 00:17:17.639 "uuid": "c8a5af2c-730d-495a-a2d8-4f186d5fcf64", 00:17:17.639 "strip_size_kb": 0, 00:17:17.639 "state": "online", 00:17:17.639 "raid_level": "raid1", 00:17:17.639 "superblock": true, 00:17:17.639 "num_base_bdevs": 2, 00:17:17.639 "num_base_bdevs_discovered": 2, 00:17:17.639 "num_base_bdevs_operational": 2, 00:17:17.639 "base_bdevs_list": [ 00:17:17.639 { 00:17:17.639 "name": "pt1", 00:17:17.639 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:17.639 "is_configured": true, 00:17:17.639 "data_offset": 256, 00:17:17.639 "data_size": 7936 00:17:17.639 }, 00:17:17.639 { 00:17:17.639 "name": "pt2", 00:17:17.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:17.639 "is_configured": true, 00:17:17.639 "data_offset": 256, 00:17:17.639 "data_size": 7936 00:17:17.639 } 00:17:17.639 ] 00:17:17.639 } 00:17:17.639 } 00:17:17.639 }' 00:17:17.639 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:17.639 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:17.639 pt2' 00:17:17.639 11:55:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.899 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:17.899 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:17.899 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:17.899 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.900 11:55:44 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:17.900 [2024-11-27 11:55:44.156321] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c8a5af2c-730d-495a-a2d8-4f186d5fcf64 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z c8a5af2c-730d-495a-a2d8-4f186d5fcf64 ']' 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.900 [2024-11-27 11:55:44.203874] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:17.900 [2024-11-27 11:55:44.203900] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:17.900 [2024-11-27 11:55:44.203999] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:17.900 [2024-11-27 11:55:44.204061] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:17.900 [2024-11-27 11:55:44.204072] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.900 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.161 [2024-11-27 11:55:44.343703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:18.161 [2024-11-27 11:55:44.345766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:18.161 [2024-11-27 11:55:44.345841] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:18.161 [2024-11-27 11:55:44.345913] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:18.161 [2024-11-27 11:55:44.345928] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:18.161 [2024-11-27 11:55:44.345939] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:18.161 request: 00:17:18.161 { 00:17:18.161 "name": "raid_bdev1", 00:17:18.161 "raid_level": "raid1", 00:17:18.161 "base_bdevs": [ 00:17:18.161 "malloc1", 00:17:18.161 "malloc2" 00:17:18.161 ], 00:17:18.161 "superblock": false, 00:17:18.161 "method": "bdev_raid_create", 00:17:18.161 "req_id": 1 00:17:18.161 } 00:17:18.161 Got JSON-RPC error response 00:17:18.161 response: 00:17:18.161 { 00:17:18.161 "code": -17, 00:17:18.161 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:18.161 } 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.161 [2024-11-27 11:55:44.399552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:18.161 [2024-11-27 11:55:44.399658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.161 [2024-11-27 11:55:44.399697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:18.161 [2024-11-27 11:55:44.399731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.161 [2024-11-27 11:55:44.402179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.161 [2024-11-27 11:55:44.402253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:18.161 [2024-11-27 11:55:44.402372] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:18.161 [2024-11-27 11:55:44.402459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:18.161 pt1 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:18.161 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.162 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:18.162 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.162 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.162 11:55:44 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:18.162 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.162 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.162 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.162 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.162 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.162 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.162 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.162 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.162 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.162 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.162 "name": "raid_bdev1", 00:17:18.162 "uuid": "c8a5af2c-730d-495a-a2d8-4f186d5fcf64", 00:17:18.162 "strip_size_kb": 0, 00:17:18.162 "state": "configuring", 00:17:18.162 "raid_level": "raid1", 00:17:18.162 "superblock": true, 00:17:18.162 "num_base_bdevs": 2, 00:17:18.162 "num_base_bdevs_discovered": 1, 00:17:18.162 "num_base_bdevs_operational": 2, 00:17:18.162 "base_bdevs_list": [ 00:17:18.162 { 00:17:18.162 "name": "pt1", 00:17:18.162 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:18.162 "is_configured": true, 00:17:18.162 "data_offset": 256, 00:17:18.162 "data_size": 7936 00:17:18.162 }, 00:17:18.162 { 00:17:18.162 "name": null, 00:17:18.162 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:18.162 "is_configured": false, 00:17:18.162 "data_offset": 256, 00:17:18.162 "data_size": 7936 00:17:18.162 } 
00:17:18.162 ] 00:17:18.162 }' 00:17:18.162 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.162 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.732 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:18.732 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:18.732 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:18.732 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:18.732 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.732 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.732 [2024-11-27 11:55:44.862813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:18.732 [2024-11-27 11:55:44.862972] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.732 [2024-11-27 11:55:44.863025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:18.732 [2024-11-27 11:55:44.863057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.732 [2024-11-27 11:55:44.863531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.732 [2024-11-27 11:55:44.863554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:18.732 [2024-11-27 11:55:44.863637] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:18.732 [2024-11-27 11:55:44.863665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:18.732 [2024-11-27 11:55:44.863797] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:17:18.732 [2024-11-27 11:55:44.863808] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:18.732 [2024-11-27 11:55:44.864078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:18.732 [2024-11-27 11:55:44.864236] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:18.732 [2024-11-27 11:55:44.864296] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:18.732 [2024-11-27 11:55:44.864460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.732 pt2 00:17:18.732 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.732 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:18.732 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:18.732 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:18.732 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.732 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.732 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.732 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.732 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:18.732 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.732 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.732 11:55:44 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.732 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.732 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.732 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.732 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.732 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.732 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.732 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.732 "name": "raid_bdev1", 00:17:18.732 "uuid": "c8a5af2c-730d-495a-a2d8-4f186d5fcf64", 00:17:18.732 "strip_size_kb": 0, 00:17:18.732 "state": "online", 00:17:18.732 "raid_level": "raid1", 00:17:18.732 "superblock": true, 00:17:18.732 "num_base_bdevs": 2, 00:17:18.732 "num_base_bdevs_discovered": 2, 00:17:18.732 "num_base_bdevs_operational": 2, 00:17:18.732 "base_bdevs_list": [ 00:17:18.732 { 00:17:18.732 "name": "pt1", 00:17:18.732 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:18.732 "is_configured": true, 00:17:18.732 "data_offset": 256, 00:17:18.732 "data_size": 7936 00:17:18.732 }, 00:17:18.732 { 00:17:18.732 "name": "pt2", 00:17:18.732 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:18.732 "is_configured": true, 00:17:18.732 "data_offset": 256, 00:17:18.732 "data_size": 7936 00:17:18.732 } 00:17:18.732 ] 00:17:18.732 }' 00:17:18.732 11:55:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.732 11:55:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.993 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:17:18.993 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:18.993 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:18.993 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:18.993 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:18.993 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:18.993 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:18.993 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:18.993 11:55:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.993 11:55:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.993 [2024-11-27 11:55:45.350224] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:18.993 11:55:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.253 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:19.253 "name": "raid_bdev1", 00:17:19.253 "aliases": [ 00:17:19.253 "c8a5af2c-730d-495a-a2d8-4f186d5fcf64" 00:17:19.253 ], 00:17:19.253 "product_name": "Raid Volume", 00:17:19.253 "block_size": 4096, 00:17:19.253 "num_blocks": 7936, 00:17:19.253 "uuid": "c8a5af2c-730d-495a-a2d8-4f186d5fcf64", 00:17:19.253 "assigned_rate_limits": { 00:17:19.253 "rw_ios_per_sec": 0, 00:17:19.253 "rw_mbytes_per_sec": 0, 00:17:19.253 "r_mbytes_per_sec": 0, 00:17:19.253 "w_mbytes_per_sec": 0 00:17:19.253 }, 00:17:19.253 "claimed": false, 00:17:19.253 "zoned": false, 00:17:19.253 "supported_io_types": { 00:17:19.253 "read": true, 00:17:19.253 "write": true, 00:17:19.253 "unmap": false, 
00:17:19.253 "flush": false, 00:17:19.253 "reset": true, 00:17:19.253 "nvme_admin": false, 00:17:19.253 "nvme_io": false, 00:17:19.253 "nvme_io_md": false, 00:17:19.253 "write_zeroes": true, 00:17:19.253 "zcopy": false, 00:17:19.253 "get_zone_info": false, 00:17:19.253 "zone_management": false, 00:17:19.253 "zone_append": false, 00:17:19.253 "compare": false, 00:17:19.253 "compare_and_write": false, 00:17:19.253 "abort": false, 00:17:19.253 "seek_hole": false, 00:17:19.253 "seek_data": false, 00:17:19.253 "copy": false, 00:17:19.253 "nvme_iov_md": false 00:17:19.253 }, 00:17:19.253 "memory_domains": [ 00:17:19.253 { 00:17:19.253 "dma_device_id": "system", 00:17:19.253 "dma_device_type": 1 00:17:19.253 }, 00:17:19.253 { 00:17:19.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.253 "dma_device_type": 2 00:17:19.253 }, 00:17:19.253 { 00:17:19.253 "dma_device_id": "system", 00:17:19.253 "dma_device_type": 1 00:17:19.253 }, 00:17:19.253 { 00:17:19.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.253 "dma_device_type": 2 00:17:19.253 } 00:17:19.253 ], 00:17:19.253 "driver_specific": { 00:17:19.253 "raid": { 00:17:19.253 "uuid": "c8a5af2c-730d-495a-a2d8-4f186d5fcf64", 00:17:19.253 "strip_size_kb": 0, 00:17:19.253 "state": "online", 00:17:19.253 "raid_level": "raid1", 00:17:19.253 "superblock": true, 00:17:19.253 "num_base_bdevs": 2, 00:17:19.253 "num_base_bdevs_discovered": 2, 00:17:19.253 "num_base_bdevs_operational": 2, 00:17:19.253 "base_bdevs_list": [ 00:17:19.253 { 00:17:19.253 "name": "pt1", 00:17:19.253 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:19.253 "is_configured": true, 00:17:19.253 "data_offset": 256, 00:17:19.253 "data_size": 7936 00:17:19.253 }, 00:17:19.253 { 00:17:19.253 "name": "pt2", 00:17:19.253 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.253 "is_configured": true, 00:17:19.253 "data_offset": 256, 00:17:19.253 "data_size": 7936 00:17:19.253 } 00:17:19.253 ] 00:17:19.253 } 00:17:19.253 } 00:17:19.253 }' 00:17:19.253 
11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:19.253 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:19.253 pt2' 00:17:19.253 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.253 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:19.253 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:19.253 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:19.253 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.253 11:55:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.253 11:55:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.253 11:55:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.253 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:19.253 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:19.253 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:19.253 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.253 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:19.253 11:55:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.253 
11:55:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.253 11:55:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.253 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:19.254 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:19.254 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:19.254 11:55:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.254 11:55:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.254 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:19.254 [2024-11-27 11:55:45.581800] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:19.254 11:55:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.254 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' c8a5af2c-730d-495a-a2d8-4f186d5fcf64 '!=' c8a5af2c-730d-495a-a2d8-4f186d5fcf64 ']' 00:17:19.254 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:19.254 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:19.254 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:19.254 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:19.254 11:55:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.254 11:55:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.254 [2024-11-27 11:55:45.629530] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:19.514 
11:55:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.514 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:19.514 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.514 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.514 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.514 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.514 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:19.514 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.514 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.514 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.514 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.514 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.514 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.514 11:55:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.514 11:55:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.514 11:55:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.514 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.514 "name": "raid_bdev1", 00:17:19.514 "uuid": "c8a5af2c-730d-495a-a2d8-4f186d5fcf64", 
00:17:19.514 "strip_size_kb": 0, 00:17:19.514 "state": "online", 00:17:19.514 "raid_level": "raid1", 00:17:19.514 "superblock": true, 00:17:19.514 "num_base_bdevs": 2, 00:17:19.514 "num_base_bdevs_discovered": 1, 00:17:19.514 "num_base_bdevs_operational": 1, 00:17:19.514 "base_bdevs_list": [ 00:17:19.514 { 00:17:19.514 "name": null, 00:17:19.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.514 "is_configured": false, 00:17:19.514 "data_offset": 0, 00:17:19.514 "data_size": 7936 00:17:19.514 }, 00:17:19.514 { 00:17:19.514 "name": "pt2", 00:17:19.514 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.514 "is_configured": true, 00:17:19.514 "data_offset": 256, 00:17:19.514 "data_size": 7936 00:17:19.514 } 00:17:19.514 ] 00:17:19.514 }' 00:17:19.514 11:55:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.514 11:55:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.774 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:19.775 11:55:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.775 11:55:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.775 [2024-11-27 11:55:46.112650] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.775 [2024-11-27 11:55:46.112746] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:19.775 [2024-11-27 11:55:46.112891] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.775 [2024-11-27 11:55:46.112972] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:19.775 [2024-11-27 11:55:46.113025] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:19.775 11:55:46 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.775 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.775 11:55:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.775 11:55:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.775 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:19.775 11:55:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:17:20.035 11:55:46 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.035 [2024-11-27 11:55:46.188512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:20.035 [2024-11-27 11:55:46.188578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.035 [2024-11-27 11:55:46.188596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:20.035 [2024-11-27 11:55:46.188607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.035 [2024-11-27 11:55:46.190869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.035 [2024-11-27 11:55:46.190943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:20.035 [2024-11-27 11:55:46.191033] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:20.035 [2024-11-27 11:55:46.191084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:20.035 [2024-11-27 11:55:46.191192] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:20.035 [2024-11-27 11:55:46.191204] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:20.035 [2024-11-27 11:55:46.191430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:20.035 [2024-11-27 11:55:46.191584] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:20.035 [2024-11-27 11:55:46.191594] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:17:20.035 [2024-11-27 11:55:46.191746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.035 pt2 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.035 "name": "raid_bdev1", 00:17:20.035 "uuid": "c8a5af2c-730d-495a-a2d8-4f186d5fcf64", 00:17:20.035 "strip_size_kb": 0, 00:17:20.035 "state": "online", 00:17:20.035 "raid_level": "raid1", 00:17:20.035 "superblock": true, 00:17:20.035 "num_base_bdevs": 2, 00:17:20.035 "num_base_bdevs_discovered": 1, 00:17:20.035 "num_base_bdevs_operational": 1, 00:17:20.035 "base_bdevs_list": [ 00:17:20.035 { 00:17:20.035 "name": null, 00:17:20.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.035 "is_configured": false, 00:17:20.035 "data_offset": 256, 00:17:20.035 "data_size": 7936 00:17:20.035 }, 00:17:20.035 { 00:17:20.035 "name": "pt2", 00:17:20.035 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:20.035 "is_configured": true, 00:17:20.035 "data_offset": 256, 00:17:20.035 "data_size": 7936 00:17:20.035 } 00:17:20.035 ] 00:17:20.035 }' 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.035 11:55:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.295 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:20.295 11:55:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.295 11:55:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.295 [2024-11-27 11:55:46.635803] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:20.295 [2024-11-27 11:55:46.635904] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:20.295 [2024-11-27 11:55:46.636057] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:20.295 [2024-11-27 11:55:46.636161] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:20.295 [2024-11-27 11:55:46.636213] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:20.295 11:55:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.295 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.295 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:20.295 11:55:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.295 11:55:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.295 11:55:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.554 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:20.554 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:20.554 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:20.554 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:20.554 11:55:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.554 11:55:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.554 [2024-11-27 11:55:46.691706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:20.554 [2024-11-27 11:55:46.691805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.554 [2024-11-27 11:55:46.691865] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:20.554 [2024-11-27 11:55:46.691906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.554 [2024-11-27 11:55:46.694263] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.554 [2024-11-27 11:55:46.694298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:20.554 [2024-11-27 11:55:46.694379] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:20.554 [2024-11-27 11:55:46.694427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:20.554 [2024-11-27 11:55:46.694583] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:20.554 [2024-11-27 11:55:46.694595] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:20.554 [2024-11-27 11:55:46.694612] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:20.554 [2024-11-27 11:55:46.694681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:20.554 [2024-11-27 11:55:46.694748] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:20.554 [2024-11-27 11:55:46.694757] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:20.554 [2024-11-27 11:55:46.695023] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:20.554 [2024-11-27 11:55:46.695172] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:20.554 [2024-11-27 11:55:46.695184] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:20.554 [2024-11-27 11:55:46.695338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.554 pt1 00:17:20.554 11:55:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.554 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:17:20.554 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:20.554 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.554 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.554 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.554 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.554 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:20.554 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.554 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.554 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.554 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.554 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.554 11:55:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.554 11:55:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.554 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.554 11:55:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.554 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.554 "name": "raid_bdev1", 00:17:20.554 "uuid": "c8a5af2c-730d-495a-a2d8-4f186d5fcf64", 00:17:20.554 "strip_size_kb": 0, 00:17:20.554 "state": "online", 00:17:20.554 "raid_level": "raid1", 
00:17:20.554 "superblock": true, 00:17:20.554 "num_base_bdevs": 2, 00:17:20.554 "num_base_bdevs_discovered": 1, 00:17:20.554 "num_base_bdevs_operational": 1, 00:17:20.554 "base_bdevs_list": [ 00:17:20.554 { 00:17:20.554 "name": null, 00:17:20.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.554 "is_configured": false, 00:17:20.554 "data_offset": 256, 00:17:20.554 "data_size": 7936 00:17:20.554 }, 00:17:20.554 { 00:17:20.554 "name": "pt2", 00:17:20.554 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:20.554 "is_configured": true, 00:17:20.554 "data_offset": 256, 00:17:20.554 "data_size": 7936 00:17:20.554 } 00:17:20.554 ] 00:17:20.554 }' 00:17:20.554 11:55:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.554 11:55:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.814 11:55:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:20.814 11:55:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:20.814 11:55:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.814 11:55:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.814 11:55:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.076 11:55:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:21.076 11:55:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:21.076 11:55:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:21.076 11:55:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.076 11:55:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.076 
[2024-11-27 11:55:47.211062] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:21.076 11:55:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.076 11:55:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' c8a5af2c-730d-495a-a2d8-4f186d5fcf64 '!=' c8a5af2c-730d-495a-a2d8-4f186d5fcf64 ']' 00:17:21.076 11:55:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86251 00:17:21.076 11:55:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86251 ']' 00:17:21.076 11:55:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86251 00:17:21.076 11:55:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:17:21.076 11:55:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.076 11:55:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86251 00:17:21.076 killing process with pid 86251 00:17:21.076 11:55:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:21.076 11:55:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:21.076 11:55:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86251' 00:17:21.076 11:55:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86251 00:17:21.076 [2024-11-27 11:55:47.291824] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:21.076 [2024-11-27 11:55:47.291959] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.076 [2024-11-27 11:55:47.292010] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:21.076 [2024-11-27 11:55:47.292023] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:21.076 11:55:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86251 00:17:21.338 [2024-11-27 11:55:47.500425] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:22.277 11:55:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:22.277 00:17:22.277 real 0m6.166s 00:17:22.277 user 0m9.326s 00:17:22.277 sys 0m1.129s 00:17:22.277 11:55:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:22.277 11:55:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.277 ************************************ 00:17:22.277 END TEST raid_superblock_test_4k 00:17:22.277 ************************************ 00:17:22.538 11:55:48 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:17:22.538 11:55:48 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:22.538 11:55:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:22.538 11:55:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.538 11:55:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:22.538 ************************************ 00:17:22.538 START TEST raid_rebuild_test_sb_4k 00:17:22.538 ************************************ 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:22.538 11:55:48 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:22.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86574 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86574 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86574 ']' 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:22.538 11:55:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.538 [2024-11-27 11:55:48.797589] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:17:22.538 [2024-11-27 11:55:48.797788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86574 ] 00:17:22.538 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:17:22.538 Zero copy mechanism will not be used. 00:17:22.798 [2024-11-27 11:55:48.973025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.798 [2024-11-27 11:55:49.084995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.057 [2024-11-27 11:55:49.281363] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.057 [2024-11-27 11:55:49.281513] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.317 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.317 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:23.317 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:23.317 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:23.317 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.317 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.317 BaseBdev1_malloc 00:17:23.317 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.317 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:23.317 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.317 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.317 [2024-11-27 11:55:49.673417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:23.317 [2024-11-27 11:55:49.673518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.317 [2024-11-27 11:55:49.673557] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007280 00:17:23.317 [2024-11-27 11:55:49.673591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.317 [2024-11-27 11:55:49.675626] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.317 [2024-11-27 11:55:49.675706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:23.317 BaseBdev1 00:17:23.317 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.317 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:23.317 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:23.317 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.317 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.577 BaseBdev2_malloc 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.577 [2024-11-27 11:55:49.730196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:23.577 [2024-11-27 11:55:49.730262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.577 [2024-11-27 11:55:49.730286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:23.577 [2024-11-27 11:55:49.730297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:17:23.577 [2024-11-27 11:55:49.732458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.577 [2024-11-27 11:55:49.732498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:23.577 BaseBdev2 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.577 spare_malloc 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.577 spare_delay 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.577 [2024-11-27 11:55:49.807997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:23.577 [2024-11-27 11:55:49.808055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.577 [2024-11-27 11:55:49.808077] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:23.577 [2024-11-27 11:55:49.808087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.577 [2024-11-27 11:55:49.810243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.577 [2024-11-27 11:55:49.810287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:23.577 spare 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.577 [2024-11-27 11:55:49.820058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:23.577 [2024-11-27 11:55:49.821820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:23.577 [2024-11-27 11:55:49.822013] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:23.577 [2024-11-27 11:55:49.822029] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:23.577 [2024-11-27 11:55:49.822264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:23.577 [2024-11-27 11:55:49.822421] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:23.577 [2024-11-27 11:55:49.822430] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:23.577 [2024-11-27 11:55:49.822591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.577 
11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.577 "name": "raid_bdev1", 00:17:23.577 "uuid": "8e44e8de-8cbf-4e59-a429-3219e11fbc42", 
00:17:23.577 "strip_size_kb": 0, 00:17:23.577 "state": "online", 00:17:23.577 "raid_level": "raid1", 00:17:23.577 "superblock": true, 00:17:23.577 "num_base_bdevs": 2, 00:17:23.577 "num_base_bdevs_discovered": 2, 00:17:23.577 "num_base_bdevs_operational": 2, 00:17:23.577 "base_bdevs_list": [ 00:17:23.577 { 00:17:23.577 "name": "BaseBdev1", 00:17:23.577 "uuid": "c5f5ce80-5acf-5684-a4d6-01feb0017661", 00:17:23.577 "is_configured": true, 00:17:23.577 "data_offset": 256, 00:17:23.577 "data_size": 7936 00:17:23.577 }, 00:17:23.577 { 00:17:23.577 "name": "BaseBdev2", 00:17:23.577 "uuid": "e0c6d374-6ddc-51fd-a80f-59f7ac4bf29f", 00:17:23.577 "is_configured": true, 00:17:23.577 "data_offset": 256, 00:17:23.577 "data_size": 7936 00:17:23.577 } 00:17:23.577 ] 00:17:23.577 }' 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.577 11:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.148 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:24.148 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.148 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.148 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:24.148 [2024-11-27 11:55:50.271569] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:24.148 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.148 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:24.148 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.148 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:17:24.148 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.148 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.148 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.148 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:24.148 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:24.148 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:24.148 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:24.148 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:24.148 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:24.148 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:24.148 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:24.148 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:24.148 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:24.148 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:24.148 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:24.148 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:24.148 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:24.407 [2024-11-27 11:55:50.542861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005fb0 00:17:24.407 /dev/nbd0 00:17:24.407 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:24.407 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:24.407 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:24.407 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:24.407 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:24.407 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:24.407 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:24.407 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:24.407 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:24.407 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:24.407 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:24.407 1+0 records in 00:17:24.407 1+0 records out 00:17:24.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368941 s, 11.1 MB/s 00:17:24.407 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.407 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:24.407 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.407 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:24.407 11:55:50 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:24.407 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:24.407 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:24.407 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:24.407 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:24.407 11:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:24.976 7936+0 records in 00:17:24.976 7936+0 records out 00:17:24.976 32505856 bytes (33 MB, 31 MiB) copied, 0.631626 s, 51.5 MB/s 00:17:24.976 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:24.976 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:24.976 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:24.976 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:24.976 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:24.976 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:24.976 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:25.235 [2024-11-27 11:55:51.447995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.235 [2024-11-27 11:55:51.465243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.235 "name": "raid_bdev1", 00:17:25.235 "uuid": "8e44e8de-8cbf-4e59-a429-3219e11fbc42", 00:17:25.235 "strip_size_kb": 0, 00:17:25.235 "state": "online", 00:17:25.235 "raid_level": "raid1", 00:17:25.235 "superblock": true, 00:17:25.235 "num_base_bdevs": 2, 00:17:25.235 "num_base_bdevs_discovered": 1, 00:17:25.235 "num_base_bdevs_operational": 1, 00:17:25.235 "base_bdevs_list": [ 00:17:25.235 { 00:17:25.235 "name": null, 00:17:25.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.235 "is_configured": false, 00:17:25.235 "data_offset": 0, 00:17:25.235 "data_size": 7936 00:17:25.235 }, 00:17:25.235 { 00:17:25.235 "name": "BaseBdev2", 00:17:25.235 "uuid": "e0c6d374-6ddc-51fd-a80f-59f7ac4bf29f", 00:17:25.235 "is_configured": true, 00:17:25.235 "data_offset": 256, 00:17:25.235 "data_size": 7936 00:17:25.235 } 00:17:25.235 ] 00:17:25.235 }' 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.235 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.803 11:55:51 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:25.803 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.803 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.803 [2024-11-27 11:55:51.932442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:25.803 [2024-11-27 11:55:51.951028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:25.803 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.803 11:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:25.803 [2024-11-27 11:55:51.952999] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:26.742 11:55:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:26.742 11:55:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.742 11:55:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:26.742 11:55:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:26.742 11:55:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.742 11:55:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.742 11:55:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.742 11:55:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.742 11:55:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.742 11:55:52 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.742 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.742 "name": "raid_bdev1", 00:17:26.742 "uuid": "8e44e8de-8cbf-4e59-a429-3219e11fbc42", 00:17:26.742 "strip_size_kb": 0, 00:17:26.742 "state": "online", 00:17:26.742 "raid_level": "raid1", 00:17:26.742 "superblock": true, 00:17:26.742 "num_base_bdevs": 2, 00:17:26.742 "num_base_bdevs_discovered": 2, 00:17:26.742 "num_base_bdevs_operational": 2, 00:17:26.742 "process": { 00:17:26.742 "type": "rebuild", 00:17:26.742 "target": "spare", 00:17:26.742 "progress": { 00:17:26.742 "blocks": 2560, 00:17:26.742 "percent": 32 00:17:26.742 } 00:17:26.742 }, 00:17:26.742 "base_bdevs_list": [ 00:17:26.742 { 00:17:26.742 "name": "spare", 00:17:26.742 "uuid": "98374e6e-3761-5d39-959b-b2cb87be355b", 00:17:26.742 "is_configured": true, 00:17:26.742 "data_offset": 256, 00:17:26.742 "data_size": 7936 00:17:26.742 }, 00:17:26.742 { 00:17:26.742 "name": "BaseBdev2", 00:17:26.742 "uuid": "e0c6d374-6ddc-51fd-a80f-59f7ac4bf29f", 00:17:26.742 "is_configured": true, 00:17:26.742 "data_offset": 256, 00:17:26.742 "data_size": 7936 00:17:26.742 } 00:17:26.742 ] 00:17:26.742 }' 00:17:26.742 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.742 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:26.742 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.742 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:26.742 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:26.742 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.742 11:55:53 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.742 [2024-11-27 11:55:53.124544] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:27.000 [2024-11-27 11:55:53.158879] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:27.000 [2024-11-27 11:55:53.158979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.000 [2024-11-27 11:55:53.158997] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:27.000 [2024-11-27 11:55:53.159009] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:27.000 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.001 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:27.001 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.001 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.001 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:27.001 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:27.001 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:27.001 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.001 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.001 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.001 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.001 11:55:53 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.001 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.001 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.001 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.001 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.001 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.001 "name": "raid_bdev1", 00:17:27.001 "uuid": "8e44e8de-8cbf-4e59-a429-3219e11fbc42", 00:17:27.001 "strip_size_kb": 0, 00:17:27.001 "state": "online", 00:17:27.001 "raid_level": "raid1", 00:17:27.001 "superblock": true, 00:17:27.001 "num_base_bdevs": 2, 00:17:27.001 "num_base_bdevs_discovered": 1, 00:17:27.001 "num_base_bdevs_operational": 1, 00:17:27.001 "base_bdevs_list": [ 00:17:27.001 { 00:17:27.001 "name": null, 00:17:27.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.001 "is_configured": false, 00:17:27.001 "data_offset": 0, 00:17:27.001 "data_size": 7936 00:17:27.001 }, 00:17:27.001 { 00:17:27.001 "name": "BaseBdev2", 00:17:27.001 "uuid": "e0c6d374-6ddc-51fd-a80f-59f7ac4bf29f", 00:17:27.001 "is_configured": true, 00:17:27.001 "data_offset": 256, 00:17:27.001 "data_size": 7936 00:17:27.001 } 00:17:27.001 ] 00:17:27.001 }' 00:17:27.001 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.001 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.260 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:27.260 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.260 11:55:53 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:27.260 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:27.260 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.260 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.260 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.260 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.260 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.260 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.520 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.520 "name": "raid_bdev1", 00:17:27.520 "uuid": "8e44e8de-8cbf-4e59-a429-3219e11fbc42", 00:17:27.520 "strip_size_kb": 0, 00:17:27.520 "state": "online", 00:17:27.520 "raid_level": "raid1", 00:17:27.520 "superblock": true, 00:17:27.520 "num_base_bdevs": 2, 00:17:27.520 "num_base_bdevs_discovered": 1, 00:17:27.520 "num_base_bdevs_operational": 1, 00:17:27.520 "base_bdevs_list": [ 00:17:27.520 { 00:17:27.520 "name": null, 00:17:27.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.520 "is_configured": false, 00:17:27.520 "data_offset": 0, 00:17:27.520 "data_size": 7936 00:17:27.520 }, 00:17:27.520 { 00:17:27.520 "name": "BaseBdev2", 00:17:27.520 "uuid": "e0c6d374-6ddc-51fd-a80f-59f7ac4bf29f", 00:17:27.520 "is_configured": true, 00:17:27.520 "data_offset": 256, 00:17:27.520 "data_size": 7936 00:17:27.520 } 00:17:27.520 ] 00:17:27.520 }' 00:17:27.520 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.520 11:55:53 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:27.520 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.520 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:27.520 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:27.520 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.520 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.520 [2024-11-27 11:55:53.771323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:27.520 [2024-11-27 11:55:53.787247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:27.520 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.520 11:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:27.520 [2024-11-27 11:55:53.789180] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:28.458 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.458 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.458 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.458 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.458 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.458 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.458 11:55:54 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.458 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.458 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.458 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.718 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.718 "name": "raid_bdev1", 00:17:28.718 "uuid": "8e44e8de-8cbf-4e59-a429-3219e11fbc42", 00:17:28.718 "strip_size_kb": 0, 00:17:28.718 "state": "online", 00:17:28.718 "raid_level": "raid1", 00:17:28.718 "superblock": true, 00:17:28.718 "num_base_bdevs": 2, 00:17:28.718 "num_base_bdevs_discovered": 2, 00:17:28.718 "num_base_bdevs_operational": 2, 00:17:28.718 "process": { 00:17:28.718 "type": "rebuild", 00:17:28.718 "target": "spare", 00:17:28.718 "progress": { 00:17:28.718 "blocks": 2560, 00:17:28.718 "percent": 32 00:17:28.718 } 00:17:28.718 }, 00:17:28.718 "base_bdevs_list": [ 00:17:28.718 { 00:17:28.718 "name": "spare", 00:17:28.718 "uuid": "98374e6e-3761-5d39-959b-b2cb87be355b", 00:17:28.718 "is_configured": true, 00:17:28.718 "data_offset": 256, 00:17:28.718 "data_size": 7936 00:17:28.718 }, 00:17:28.718 { 00:17:28.718 "name": "BaseBdev2", 00:17:28.718 "uuid": "e0c6d374-6ddc-51fd-a80f-59f7ac4bf29f", 00:17:28.718 "is_configured": true, 00:17:28.718 "data_offset": 256, 00:17:28.718 "data_size": 7936 00:17:28.718 } 00:17:28.718 ] 00:17:28.718 }' 00:17:28.718 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.718 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.718 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.718 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.718 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:28.718 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:28.718 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:28.718 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:28.718 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:28.718 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:28.718 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=689 00:17:28.718 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:28.718 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.718 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.718 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.718 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.718 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.718 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.718 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.718 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.718 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.718 11:55:54 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.718 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.718 "name": "raid_bdev1", 00:17:28.718 "uuid": "8e44e8de-8cbf-4e59-a429-3219e11fbc42", 00:17:28.718 "strip_size_kb": 0, 00:17:28.718 "state": "online", 00:17:28.718 "raid_level": "raid1", 00:17:28.718 "superblock": true, 00:17:28.718 "num_base_bdevs": 2, 00:17:28.718 "num_base_bdevs_discovered": 2, 00:17:28.719 "num_base_bdevs_operational": 2, 00:17:28.719 "process": { 00:17:28.719 "type": "rebuild", 00:17:28.719 "target": "spare", 00:17:28.719 "progress": { 00:17:28.719 "blocks": 2816, 00:17:28.719 "percent": 35 00:17:28.719 } 00:17:28.719 }, 00:17:28.719 "base_bdevs_list": [ 00:17:28.719 { 00:17:28.719 "name": "spare", 00:17:28.719 "uuid": "98374e6e-3761-5d39-959b-b2cb87be355b", 00:17:28.719 "is_configured": true, 00:17:28.719 "data_offset": 256, 00:17:28.719 "data_size": 7936 00:17:28.719 }, 00:17:28.719 { 00:17:28.719 "name": "BaseBdev2", 00:17:28.719 "uuid": "e0c6d374-6ddc-51fd-a80f-59f7ac4bf29f", 00:17:28.719 "is_configured": true, 00:17:28.719 "data_offset": 256, 00:17:28.719 "data_size": 7936 00:17:28.719 } 00:17:28.719 ] 00:17:28.719 }' 00:17:28.719 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.719 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.719 11:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.719 11:55:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.719 11:55:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:30.099 11:55:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:30.099 11:55:56 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.099 11:55:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.099 11:55:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.099 11:55:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.099 11:55:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.099 11:55:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.099 11:55:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.099 11:55:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.099 11:55:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.099 11:55:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.099 11:55:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.099 "name": "raid_bdev1", 00:17:30.099 "uuid": "8e44e8de-8cbf-4e59-a429-3219e11fbc42", 00:17:30.099 "strip_size_kb": 0, 00:17:30.099 "state": "online", 00:17:30.099 "raid_level": "raid1", 00:17:30.099 "superblock": true, 00:17:30.099 "num_base_bdevs": 2, 00:17:30.099 "num_base_bdevs_discovered": 2, 00:17:30.099 "num_base_bdevs_operational": 2, 00:17:30.099 "process": { 00:17:30.099 "type": "rebuild", 00:17:30.099 "target": "spare", 00:17:30.099 "progress": { 00:17:30.099 "blocks": 5632, 00:17:30.099 "percent": 70 00:17:30.099 } 00:17:30.100 }, 00:17:30.100 "base_bdevs_list": [ 00:17:30.100 { 00:17:30.100 "name": "spare", 00:17:30.100 "uuid": "98374e6e-3761-5d39-959b-b2cb87be355b", 00:17:30.100 "is_configured": true, 00:17:30.100 "data_offset": 256, 00:17:30.100 "data_size": 7936 00:17:30.100 
}, 00:17:30.100 { 00:17:30.100 "name": "BaseBdev2", 00:17:30.100 "uuid": "e0c6d374-6ddc-51fd-a80f-59f7ac4bf29f", 00:17:30.100 "is_configured": true, 00:17:30.100 "data_offset": 256, 00:17:30.100 "data_size": 7936 00:17:30.100 } 00:17:30.100 ] 00:17:30.100 }' 00:17:30.100 11:55:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.100 11:55:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:30.100 11:55:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.100 11:55:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.100 11:55:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:30.669 [2024-11-27 11:55:56.903778] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:30.669 [2024-11-27 11:55:56.903975] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:30.669 [2024-11-27 11:55:56.904114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.930 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:30.930 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.930 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.930 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.930 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.930 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.930 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:30.930 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.930 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.930 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.930 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.930 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.930 "name": "raid_bdev1", 00:17:30.930 "uuid": "8e44e8de-8cbf-4e59-a429-3219e11fbc42", 00:17:30.930 "strip_size_kb": 0, 00:17:30.930 "state": "online", 00:17:30.930 "raid_level": "raid1", 00:17:30.930 "superblock": true, 00:17:30.930 "num_base_bdevs": 2, 00:17:30.930 "num_base_bdevs_discovered": 2, 00:17:30.930 "num_base_bdevs_operational": 2, 00:17:30.930 "base_bdevs_list": [ 00:17:30.931 { 00:17:30.931 "name": "spare", 00:17:30.931 "uuid": "98374e6e-3761-5d39-959b-b2cb87be355b", 00:17:30.931 "is_configured": true, 00:17:30.931 "data_offset": 256, 00:17:30.931 "data_size": 7936 00:17:30.931 }, 00:17:30.931 { 00:17:30.931 "name": "BaseBdev2", 00:17:30.931 "uuid": "e0c6d374-6ddc-51fd-a80f-59f7ac4bf29f", 00:17:30.931 "is_configured": true, 00:17:30.931 "data_offset": 256, 00:17:30.931 "data_size": 7936 00:17:30.931 } 00:17:30.931 ] 00:17:30.931 }' 00:17:30.931 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.931 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:30.931 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.191 "name": "raid_bdev1", 00:17:31.191 "uuid": "8e44e8de-8cbf-4e59-a429-3219e11fbc42", 00:17:31.191 "strip_size_kb": 0, 00:17:31.191 "state": "online", 00:17:31.191 "raid_level": "raid1", 00:17:31.191 "superblock": true, 00:17:31.191 "num_base_bdevs": 2, 00:17:31.191 "num_base_bdevs_discovered": 2, 00:17:31.191 "num_base_bdevs_operational": 2, 00:17:31.191 "base_bdevs_list": [ 00:17:31.191 { 00:17:31.191 "name": "spare", 00:17:31.191 "uuid": "98374e6e-3761-5d39-959b-b2cb87be355b", 00:17:31.191 "is_configured": true, 00:17:31.191 "data_offset": 256, 00:17:31.191 "data_size": 7936 00:17:31.191 }, 00:17:31.191 { 00:17:31.191 "name": "BaseBdev2", 00:17:31.191 "uuid": "e0c6d374-6ddc-51fd-a80f-59f7ac4bf29f", 00:17:31.191 "is_configured": true, 
00:17:31.191 "data_offset": 256, 00:17:31.191 "data_size": 7936 00:17:31.191 } 00:17:31.191 ] 00:17:31.191 }' 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.191 "name": "raid_bdev1", 00:17:31.191 "uuid": "8e44e8de-8cbf-4e59-a429-3219e11fbc42", 00:17:31.191 "strip_size_kb": 0, 00:17:31.191 "state": "online", 00:17:31.191 "raid_level": "raid1", 00:17:31.191 "superblock": true, 00:17:31.191 "num_base_bdevs": 2, 00:17:31.191 "num_base_bdevs_discovered": 2, 00:17:31.191 "num_base_bdevs_operational": 2, 00:17:31.191 "base_bdevs_list": [ 00:17:31.191 { 00:17:31.191 "name": "spare", 00:17:31.191 "uuid": "98374e6e-3761-5d39-959b-b2cb87be355b", 00:17:31.191 "is_configured": true, 00:17:31.191 "data_offset": 256, 00:17:31.191 "data_size": 7936 00:17:31.191 }, 00:17:31.191 { 00:17:31.191 "name": "BaseBdev2", 00:17:31.191 "uuid": "e0c6d374-6ddc-51fd-a80f-59f7ac4bf29f", 00:17:31.191 "is_configured": true, 00:17:31.191 "data_offset": 256, 00:17:31.191 "data_size": 7936 00:17:31.191 } 00:17:31.191 ] 00:17:31.191 }' 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.191 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.788 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:31.788 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.788 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.788 [2024-11-27 11:55:57.927979] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:31.788 [2024-11-27 11:55:57.928073] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:17:31.788 [2024-11-27 11:55:57.928185] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:31.788 [2024-11-27 11:55:57.928292] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:31.788 [2024-11-27 11:55:57.928341] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:31.788 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.788 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:31.788 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.788 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.788 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.788 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.788 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:31.788 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:31.788 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:31.788 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:31.788 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:31.788 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:31.788 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:31.788 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:31.788 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:31.788 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:31.788 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:31.788 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:31.788 11:55:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:32.047 /dev/nbd0 00:17:32.047 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:32.047 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:32.048 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:32.048 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:32.048 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:32.048 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:32.048 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:32.048 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:32.048 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:32.048 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:32.048 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:32.048 1+0 records in 00:17:32.048 1+0 records out 00:17:32.048 4096 bytes (4.1 kB, 4.0 
KiB) copied, 0.000385084 s, 10.6 MB/s 00:17:32.048 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.048 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:32.048 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.048 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:32.048 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:32.048 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:32.048 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:32.048 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:32.308 /dev/nbd1 00:17:32.308 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:32.308 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:32.308 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:32.308 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:32.308 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:32.308 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:32.308 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:32.308 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:32.308 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:32.308 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:32.308 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:32.308 1+0 records in 00:17:32.308 1+0 records out 00:17:32.308 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392424 s, 10.4 MB/s 00:17:32.308 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.308 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:32.308 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.308 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:32.308 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:32.308 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:32.308 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:32.308 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:32.308 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:32.308 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:32.308 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:32.308 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:32.308 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 
00:17:32.308 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:32.308 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:32.567 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:32.567 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:32.567 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:32.567 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:32.567 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:32.567 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:32.568 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:32.568 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:32.568 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:32.568 11:55:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:32.827 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:32.827 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:32.827 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:32.827 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:32.827 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:32.827 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # 
grep -q -w nbd1 /proc/partitions 00:17:32.827 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:32.827 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:32.827 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:32.827 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:32.827 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.828 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.828 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.828 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:32.828 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.828 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.828 [2024-11-27 11:55:59.116175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:32.828 [2024-11-27 11:55:59.116236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.828 [2024-11-27 11:55:59.116263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:32.828 [2024-11-27 11:55:59.116272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.828 [2024-11-27 11:55:59.118649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.828 [2024-11-27 11:55:59.118727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:32.828 [2024-11-27 11:55:59.118908] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:32.828 [2024-11-27 
11:55:59.118991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:32.828 [2024-11-27 11:55:59.119174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:32.828 spare 00:17:32.828 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.828 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:32.828 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.828 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.100 [2024-11-27 11:55:59.219124] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:33.100 [2024-11-27 11:55:59.219176] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:33.100 [2024-11-27 11:55:59.219527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:33.100 [2024-11-27 11:55:59.219737] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:33.100 [2024-11-27 11:55:59.219747] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:33.100 [2024-11-27 11:55:59.220037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.100 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.100 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:33.100 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.100 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.100 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:33.100 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:33.100 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:33.100 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.100 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.100 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.100 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.100 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.100 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.100 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.100 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.100 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.100 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.100 "name": "raid_bdev1", 00:17:33.100 "uuid": "8e44e8de-8cbf-4e59-a429-3219e11fbc42", 00:17:33.100 "strip_size_kb": 0, 00:17:33.100 "state": "online", 00:17:33.100 "raid_level": "raid1", 00:17:33.100 "superblock": true, 00:17:33.100 "num_base_bdevs": 2, 00:17:33.100 "num_base_bdevs_discovered": 2, 00:17:33.100 "num_base_bdevs_operational": 2, 00:17:33.100 "base_bdevs_list": [ 00:17:33.100 { 00:17:33.100 "name": "spare", 00:17:33.100 "uuid": "98374e6e-3761-5d39-959b-b2cb87be355b", 00:17:33.100 "is_configured": true, 00:17:33.100 "data_offset": 256, 00:17:33.100 "data_size": 7936 00:17:33.100 }, 00:17:33.100 { 
00:17:33.100 "name": "BaseBdev2", 00:17:33.100 "uuid": "e0c6d374-6ddc-51fd-a80f-59f7ac4bf29f", 00:17:33.100 "is_configured": true, 00:17:33.100 "data_offset": 256, 00:17:33.100 "data_size": 7936 00:17:33.100 } 00:17:33.100 ] 00:17:33.100 }' 00:17:33.100 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.100 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.359 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:33.359 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.359 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:33.359 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:33.359 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.359 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.359 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.359 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.359 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.359 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.619 "name": "raid_bdev1", 00:17:33.619 "uuid": "8e44e8de-8cbf-4e59-a429-3219e11fbc42", 00:17:33.619 "strip_size_kb": 0, 00:17:33.619 "state": "online", 00:17:33.619 "raid_level": "raid1", 00:17:33.619 "superblock": true, 00:17:33.619 "num_base_bdevs": 2, 00:17:33.619 "num_base_bdevs_discovered": 2, 
00:17:33.619 "num_base_bdevs_operational": 2, 00:17:33.619 "base_bdevs_list": [ 00:17:33.619 { 00:17:33.619 "name": "spare", 00:17:33.619 "uuid": "98374e6e-3761-5d39-959b-b2cb87be355b", 00:17:33.619 "is_configured": true, 00:17:33.619 "data_offset": 256, 00:17:33.619 "data_size": 7936 00:17:33.619 }, 00:17:33.619 { 00:17:33.619 "name": "BaseBdev2", 00:17:33.619 "uuid": "e0c6d374-6ddc-51fd-a80f-59f7ac4bf29f", 00:17:33.619 "is_configured": true, 00:17:33.619 "data_offset": 256, 00:17:33.619 "data_size": 7936 00:17:33.619 } 00:17:33.619 ] 00:17:33.619 }' 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:33.619 [2024-11-27 11:55:59.894996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.619 11:55:59 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.619 "name": "raid_bdev1", 00:17:33.619 "uuid": "8e44e8de-8cbf-4e59-a429-3219e11fbc42", 00:17:33.619 "strip_size_kb": 0, 00:17:33.619 "state": "online", 00:17:33.619 "raid_level": "raid1", 00:17:33.619 "superblock": true, 00:17:33.619 "num_base_bdevs": 2, 00:17:33.619 "num_base_bdevs_discovered": 1, 00:17:33.619 "num_base_bdevs_operational": 1, 00:17:33.619 "base_bdevs_list": [ 00:17:33.619 { 00:17:33.619 "name": null, 00:17:33.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.619 "is_configured": false, 00:17:33.619 "data_offset": 0, 00:17:33.619 "data_size": 7936 00:17:33.619 }, 00:17:33.619 { 00:17:33.619 "name": "BaseBdev2", 00:17:33.619 "uuid": "e0c6d374-6ddc-51fd-a80f-59f7ac4bf29f", 00:17:33.619 "is_configured": true, 00:17:33.619 "data_offset": 256, 00:17:33.619 "data_size": 7936 00:17:33.619 } 00:17:33.619 ] 00:17:33.619 }' 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.619 11:55:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.190 11:56:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:34.190 11:56:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.190 11:56:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.190 [2024-11-27 11:56:00.378206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:34.190 [2024-11-27 11:56:00.378414] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:34.190 [2024-11-27 11:56:00.378428] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:34.190 [2024-11-27 11:56:00.378465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:34.190 [2024-11-27 11:56:00.394433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:34.190 11:56:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.190 11:56:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:34.190 [2024-11-27 11:56:00.396271] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:35.129 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:35.129 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.129 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:35.129 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:35.129 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.129 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.129 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.129 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.129 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.129 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.129 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.129 "name": "raid_bdev1", 00:17:35.129 "uuid": "8e44e8de-8cbf-4e59-a429-3219e11fbc42", 00:17:35.129 "strip_size_kb": 0, 00:17:35.129 "state": "online", 
00:17:35.129 "raid_level": "raid1", 00:17:35.129 "superblock": true, 00:17:35.129 "num_base_bdevs": 2, 00:17:35.129 "num_base_bdevs_discovered": 2, 00:17:35.129 "num_base_bdevs_operational": 2, 00:17:35.129 "process": { 00:17:35.129 "type": "rebuild", 00:17:35.129 "target": "spare", 00:17:35.129 "progress": { 00:17:35.129 "blocks": 2560, 00:17:35.129 "percent": 32 00:17:35.129 } 00:17:35.129 }, 00:17:35.129 "base_bdevs_list": [ 00:17:35.129 { 00:17:35.129 "name": "spare", 00:17:35.129 "uuid": "98374e6e-3761-5d39-959b-b2cb87be355b", 00:17:35.129 "is_configured": true, 00:17:35.129 "data_offset": 256, 00:17:35.129 "data_size": 7936 00:17:35.129 }, 00:17:35.129 { 00:17:35.129 "name": "BaseBdev2", 00:17:35.129 "uuid": "e0c6d374-6ddc-51fd-a80f-59f7ac4bf29f", 00:17:35.129 "is_configured": true, 00:17:35.129 "data_offset": 256, 00:17:35.129 "data_size": 7936 00:17:35.129 } 00:17:35.129 ] 00:17:35.129 }' 00:17:35.129 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.129 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:35.129 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.388 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.388 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:35.388 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.388 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.388 [2024-11-27 11:56:01.536374] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:35.388 [2024-11-27 11:56:01.602032] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:35.388 [2024-11-27 
11:56:01.602117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.388 [2024-11-27 11:56:01.602132] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:35.388 [2024-11-27 11:56:01.602142] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:35.388 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.388 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:35.388 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.388 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.388 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.388 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.388 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:35.388 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.388 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.388 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.388 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.388 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.388 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.388 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.388 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:35.388 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.388 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.388 "name": "raid_bdev1", 00:17:35.388 "uuid": "8e44e8de-8cbf-4e59-a429-3219e11fbc42", 00:17:35.388 "strip_size_kb": 0, 00:17:35.388 "state": "online", 00:17:35.388 "raid_level": "raid1", 00:17:35.388 "superblock": true, 00:17:35.388 "num_base_bdevs": 2, 00:17:35.388 "num_base_bdevs_discovered": 1, 00:17:35.388 "num_base_bdevs_operational": 1, 00:17:35.388 "base_bdevs_list": [ 00:17:35.388 { 00:17:35.388 "name": null, 00:17:35.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.388 "is_configured": false, 00:17:35.388 "data_offset": 0, 00:17:35.389 "data_size": 7936 00:17:35.389 }, 00:17:35.389 { 00:17:35.389 "name": "BaseBdev2", 00:17:35.389 "uuid": "e0c6d374-6ddc-51fd-a80f-59f7ac4bf29f", 00:17:35.389 "is_configured": true, 00:17:35.389 "data_offset": 256, 00:17:35.389 "data_size": 7936 00:17:35.389 } 00:17:35.389 ] 00:17:35.389 }' 00:17:35.389 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.389 11:56:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.957 11:56:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:35.957 11:56:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.957 11:56:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.957 [2024-11-27 11:56:02.088446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:35.957 [2024-11-27 11:56:02.088578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.957 [2024-11-27 11:56:02.088626] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:17:35.957 [2024-11-27 11:56:02.088659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.957 [2024-11-27 11:56:02.089182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.957 [2024-11-27 11:56:02.089244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:35.957 [2024-11-27 11:56:02.089371] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:35.957 [2024-11-27 11:56:02.089415] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:35.957 [2024-11-27 11:56:02.089461] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:35.957 [2024-11-27 11:56:02.089538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:35.957 [2024-11-27 11:56:02.105382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:35.957 spare 00:17:35.957 11:56:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.957 11:56:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:35.957 [2024-11-27 11:56:02.107322] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:36.896 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.896 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.896 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.896 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.896 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.896 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.896 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.896 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.896 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.896 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.896 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.896 "name": "raid_bdev1", 00:17:36.896 "uuid": "8e44e8de-8cbf-4e59-a429-3219e11fbc42", 00:17:36.896 "strip_size_kb": 0, 00:17:36.896 "state": "online", 00:17:36.896 "raid_level": "raid1", 00:17:36.896 "superblock": true, 00:17:36.896 "num_base_bdevs": 2, 00:17:36.896 "num_base_bdevs_discovered": 2, 00:17:36.896 "num_base_bdevs_operational": 2, 00:17:36.896 "process": { 00:17:36.896 "type": "rebuild", 00:17:36.896 "target": "spare", 00:17:36.896 "progress": { 00:17:36.896 "blocks": 2560, 00:17:36.896 "percent": 32 00:17:36.896 } 00:17:36.896 }, 00:17:36.896 "base_bdevs_list": [ 00:17:36.896 { 00:17:36.896 "name": "spare", 00:17:36.896 "uuid": "98374e6e-3761-5d39-959b-b2cb87be355b", 00:17:36.896 "is_configured": true, 00:17:36.896 "data_offset": 256, 00:17:36.896 "data_size": 7936 00:17:36.896 }, 00:17:36.896 { 00:17:36.896 "name": "BaseBdev2", 00:17:36.896 "uuid": "e0c6d374-6ddc-51fd-a80f-59f7ac4bf29f", 00:17:36.896 "is_configured": true, 00:17:36.896 "data_offset": 256, 00:17:36.896 "data_size": 7936 00:17:36.896 } 00:17:36.896 ] 00:17:36.896 }' 00:17:36.896 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.896 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:36.896 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.896 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.896 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:36.896 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.896 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.896 [2024-11-27 11:56:03.274597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:37.156 [2024-11-27 11:56:03.312887] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:37.156 [2024-11-27 11:56:03.312974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.156 [2024-11-27 11:56:03.312993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:37.156 [2024-11-27 11:56:03.313001] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:37.156 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.156 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:37.156 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.156 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.156 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.156 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.156 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:17:37.156 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.156 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.156 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.156 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.156 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.156 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.156 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.156 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.156 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.156 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.156 "name": "raid_bdev1", 00:17:37.156 "uuid": "8e44e8de-8cbf-4e59-a429-3219e11fbc42", 00:17:37.156 "strip_size_kb": 0, 00:17:37.156 "state": "online", 00:17:37.156 "raid_level": "raid1", 00:17:37.156 "superblock": true, 00:17:37.156 "num_base_bdevs": 2, 00:17:37.156 "num_base_bdevs_discovered": 1, 00:17:37.156 "num_base_bdevs_operational": 1, 00:17:37.156 "base_bdevs_list": [ 00:17:37.156 { 00:17:37.157 "name": null, 00:17:37.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.157 "is_configured": false, 00:17:37.157 "data_offset": 0, 00:17:37.157 "data_size": 7936 00:17:37.157 }, 00:17:37.157 { 00:17:37.157 "name": "BaseBdev2", 00:17:37.157 "uuid": "e0c6d374-6ddc-51fd-a80f-59f7ac4bf29f", 00:17:37.157 "is_configured": true, 00:17:37.157 "data_offset": 256, 00:17:37.157 "data_size": 7936 00:17:37.157 } 00:17:37.157 ] 00:17:37.157 }' 
00:17:37.157 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.157 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.416 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:37.416 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.416 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:37.416 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:37.416 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.416 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.416 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.416 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.416 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.416 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.416 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.416 "name": "raid_bdev1", 00:17:37.416 "uuid": "8e44e8de-8cbf-4e59-a429-3219e11fbc42", 00:17:37.416 "strip_size_kb": 0, 00:17:37.416 "state": "online", 00:17:37.416 "raid_level": "raid1", 00:17:37.416 "superblock": true, 00:17:37.416 "num_base_bdevs": 2, 00:17:37.416 "num_base_bdevs_discovered": 1, 00:17:37.416 "num_base_bdevs_operational": 1, 00:17:37.416 "base_bdevs_list": [ 00:17:37.417 { 00:17:37.417 "name": null, 00:17:37.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.417 "is_configured": false, 00:17:37.417 "data_offset": 0, 
00:17:37.417 "data_size": 7936 00:17:37.417 }, 00:17:37.417 { 00:17:37.417 "name": "BaseBdev2", 00:17:37.417 "uuid": "e0c6d374-6ddc-51fd-a80f-59f7ac4bf29f", 00:17:37.417 "is_configured": true, 00:17:37.417 "data_offset": 256, 00:17:37.417 "data_size": 7936 00:17:37.417 } 00:17:37.417 ] 00:17:37.417 }' 00:17:37.417 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.676 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:37.676 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.676 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:37.676 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:37.676 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.676 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.676 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.676 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:37.676 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.676 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.676 [2024-11-27 11:56:03.903423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:37.676 [2024-11-27 11:56:03.903546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.676 [2024-11-27 11:56:03.903582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:37.676 [2024-11-27 11:56:03.903603] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.676 [2024-11-27 11:56:03.904097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.676 [2024-11-27 11:56:03.904117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:37.677 [2024-11-27 11:56:03.904201] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:37.677 [2024-11-27 11:56:03.904215] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:37.677 [2024-11-27 11:56:03.904224] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:37.677 [2024-11-27 11:56:03.904235] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:37.677 BaseBdev1 00:17:37.677 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.677 11:56:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:38.618 11:56:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:38.618 11:56:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.618 11:56:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.618 11:56:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.618 11:56:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.618 11:56:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:38.618 11:56:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.618 11:56:04 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.618 11:56:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.618 11:56:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.618 11:56:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.618 11:56:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.618 11:56:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.618 11:56:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.618 11:56:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.618 11:56:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.618 "name": "raid_bdev1", 00:17:38.618 "uuid": "8e44e8de-8cbf-4e59-a429-3219e11fbc42", 00:17:38.618 "strip_size_kb": 0, 00:17:38.618 "state": "online", 00:17:38.618 "raid_level": "raid1", 00:17:38.618 "superblock": true, 00:17:38.618 "num_base_bdevs": 2, 00:17:38.618 "num_base_bdevs_discovered": 1, 00:17:38.618 "num_base_bdevs_operational": 1, 00:17:38.618 "base_bdevs_list": [ 00:17:38.618 { 00:17:38.618 "name": null, 00:17:38.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.618 "is_configured": false, 00:17:38.618 "data_offset": 0, 00:17:38.618 "data_size": 7936 00:17:38.618 }, 00:17:38.618 { 00:17:38.618 "name": "BaseBdev2", 00:17:38.618 "uuid": "e0c6d374-6ddc-51fd-a80f-59f7ac4bf29f", 00:17:38.618 "is_configured": true, 00:17:38.618 "data_offset": 256, 00:17:38.618 "data_size": 7936 00:17:38.618 } 00:17:38.618 ] 00:17:38.618 }' 00:17:38.618 11:56:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.618 11:56:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:17:39.187 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:39.187 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.187 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:39.187 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:39.187 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.187 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.187 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.187 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.187 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.187 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.187 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.187 "name": "raid_bdev1", 00:17:39.187 "uuid": "8e44e8de-8cbf-4e59-a429-3219e11fbc42", 00:17:39.187 "strip_size_kb": 0, 00:17:39.187 "state": "online", 00:17:39.187 "raid_level": "raid1", 00:17:39.187 "superblock": true, 00:17:39.187 "num_base_bdevs": 2, 00:17:39.187 "num_base_bdevs_discovered": 1, 00:17:39.187 "num_base_bdevs_operational": 1, 00:17:39.187 "base_bdevs_list": [ 00:17:39.187 { 00:17:39.187 "name": null, 00:17:39.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.187 "is_configured": false, 00:17:39.187 "data_offset": 0, 00:17:39.187 "data_size": 7936 00:17:39.187 }, 00:17:39.187 { 00:17:39.187 "name": "BaseBdev2", 00:17:39.187 "uuid": "e0c6d374-6ddc-51fd-a80f-59f7ac4bf29f", 00:17:39.187 "is_configured": true, 
00:17:39.187 "data_offset": 256, 00:17:39.187 "data_size": 7936 00:17:39.187 } 00:17:39.187 ] 00:17:39.187 }' 00:17:39.187 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.187 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:39.187 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.187 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:39.187 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:39.187 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:17:39.187 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:39.187 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:39.187 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.187 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:39.187 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.187 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:39.188 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.188 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.188 [2024-11-27 11:56:05.552700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:39.188 [2024-11-27 11:56:05.552966] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:39.188 [2024-11-27 11:56:05.553032] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:39.188 request: 00:17:39.188 { 00:17:39.188 "base_bdev": "BaseBdev1", 00:17:39.188 "raid_bdev": "raid_bdev1", 00:17:39.188 "method": "bdev_raid_add_base_bdev", 00:17:39.188 "req_id": 1 00:17:39.188 } 00:17:39.188 Got JSON-RPC error response 00:17:39.188 response: 00:17:39.188 { 00:17:39.188 "code": -22, 00:17:39.188 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:39.188 } 00:17:39.188 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:39.188 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:17:39.188 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:39.188 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:39.188 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:39.188 11:56:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:40.569 11:56:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:40.569 11:56:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.569 11:56:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.569 11:56:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:40.569 11:56:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:40.569 11:56:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:40.569 11:56:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.569 11:56:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.569 11:56:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.569 11:56:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.569 11:56:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.569 11:56:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.569 11:56:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.569 11:56:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.569 11:56:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.569 11:56:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.569 "name": "raid_bdev1", 00:17:40.569 "uuid": "8e44e8de-8cbf-4e59-a429-3219e11fbc42", 00:17:40.569 "strip_size_kb": 0, 00:17:40.569 "state": "online", 00:17:40.569 "raid_level": "raid1", 00:17:40.569 "superblock": true, 00:17:40.569 "num_base_bdevs": 2, 00:17:40.569 "num_base_bdevs_discovered": 1, 00:17:40.569 "num_base_bdevs_operational": 1, 00:17:40.569 "base_bdevs_list": [ 00:17:40.569 { 00:17:40.569 "name": null, 00:17:40.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.569 "is_configured": false, 00:17:40.569 "data_offset": 0, 00:17:40.569 "data_size": 7936 00:17:40.569 }, 00:17:40.569 { 00:17:40.569 "name": "BaseBdev2", 00:17:40.569 "uuid": "e0c6d374-6ddc-51fd-a80f-59f7ac4bf29f", 00:17:40.569 "is_configured": true, 00:17:40.570 "data_offset": 256, 00:17:40.570 "data_size": 7936 00:17:40.570 } 00:17:40.570 ] 00:17:40.570 }' 
00:17:40.570 11:56:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.570 11:56:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.830 11:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:40.830 11:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.830 11:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:40.830 11:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:40.830 11:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.830 11:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.830 11:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.830 11:56:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.830 11:56:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.830 11:56:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.830 11:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.830 "name": "raid_bdev1", 00:17:40.830 "uuid": "8e44e8de-8cbf-4e59-a429-3219e11fbc42", 00:17:40.830 "strip_size_kb": 0, 00:17:40.830 "state": "online", 00:17:40.830 "raid_level": "raid1", 00:17:40.830 "superblock": true, 00:17:40.830 "num_base_bdevs": 2, 00:17:40.830 "num_base_bdevs_discovered": 1, 00:17:40.830 "num_base_bdevs_operational": 1, 00:17:40.830 "base_bdevs_list": [ 00:17:40.830 { 00:17:40.830 "name": null, 00:17:40.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.830 "is_configured": false, 00:17:40.830 "data_offset": 0, 
00:17:40.830 "data_size": 7936 00:17:40.830 }, 00:17:40.830 { 00:17:40.830 "name": "BaseBdev2", 00:17:40.830 "uuid": "e0c6d374-6ddc-51fd-a80f-59f7ac4bf29f", 00:17:40.830 "is_configured": true, 00:17:40.830 "data_offset": 256, 00:17:40.830 "data_size": 7936 00:17:40.830 } 00:17:40.830 ] 00:17:40.830 }' 00:17:40.830 11:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.830 11:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:40.830 11:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.830 11:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:40.830 11:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86574 00:17:40.830 11:56:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86574 ']' 00:17:40.830 11:56:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86574 00:17:40.830 11:56:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:40.830 11:56:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:40.830 11:56:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86574 00:17:40.830 11:56:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:40.830 11:56:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:40.830 11:56:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86574' 00:17:40.830 killing process with pid 86574 00:17:40.830 Received shutdown signal, test time was about 60.000000 seconds 00:17:40.830 00:17:40.830 Latency(us) 00:17:40.830 [2024-11-27T11:56:07.215Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.830 [2024-11-27T11:56:07.215Z] =================================================================================================================== 00:17:40.830 [2024-11-27T11:56:07.215Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:40.830 11:56:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86574 00:17:40.830 [2024-11-27 11:56:07.197435] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:40.830 [2024-11-27 11:56:07.197574] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:40.830 11:56:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86574 00:17:40.830 [2024-11-27 11:56:07.197630] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:40.830 [2024-11-27 11:56:07.197643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:41.400 [2024-11-27 11:56:07.503898] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:42.339 11:56:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:17:42.339 00:17:42.339 real 0m19.936s 00:17:42.339 user 0m26.108s 00:17:42.339 sys 0m2.564s 00:17:42.339 11:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:42.339 11:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.339 ************************************ 00:17:42.339 END TEST raid_rebuild_test_sb_4k 00:17:42.339 ************************************ 00:17:42.339 11:56:08 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:17:42.340 11:56:08 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:42.340 11:56:08 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:42.340 11:56:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:42.340 11:56:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:42.340 ************************************ 00:17:42.340 START TEST raid_state_function_test_sb_md_separate 00:17:42.340 ************************************ 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:42.340 11:56:08 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87265 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87265' 00:17:42.340 Process raid pid: 87265 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87265 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87265 ']' 00:17:42.340 11:56:08 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:42.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:42.340 11:56:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.599 [2024-11-27 11:56:08.801314] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:17:42.599 [2024-11-27 11:56:08.801510] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.599 [2024-11-27 11:56:08.973258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.857 [2024-11-27 11:56:09.090176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.115 [2024-11-27 11:56:09.292990] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:43.115 [2024-11-27 11:56:09.293040] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:43.374 11:56:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:43.374 11:56:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:43.374 11:56:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:43.374 11:56:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.374 11:56:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.374 [2024-11-27 11:56:09.654945] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:43.374 [2024-11-27 11:56:09.655013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:43.374 [2024-11-27 11:56:09.655024] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:43.374 [2024-11-27 11:56:09.655034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:43.374 11:56:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.374 11:56:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:43.374 11:56:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:43.374 11:56:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:43.374 11:56:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.374 11:56:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.374 11:56:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:43.374 11:56:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.374 11:56:09 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.374 11:56:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.374 11:56:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.374 11:56:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.374 11:56:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.374 11:56:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.374 11:56:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.374 11:56:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.374 11:56:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.374 "name": "Existed_Raid", 00:17:43.374 "uuid": "399557fd-ec85-4a1e-b13c-2fbc06d882f5", 00:17:43.374 "strip_size_kb": 0, 00:17:43.374 "state": "configuring", 00:17:43.374 "raid_level": "raid1", 00:17:43.374 "superblock": true, 00:17:43.374 "num_base_bdevs": 2, 00:17:43.374 "num_base_bdevs_discovered": 0, 00:17:43.374 "num_base_bdevs_operational": 2, 00:17:43.374 "base_bdevs_list": [ 00:17:43.374 { 00:17:43.374 "name": "BaseBdev1", 00:17:43.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.374 "is_configured": false, 00:17:43.374 "data_offset": 0, 00:17:43.374 "data_size": 0 00:17:43.374 }, 00:17:43.374 { 00:17:43.374 "name": "BaseBdev2", 00:17:43.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.374 "is_configured": false, 00:17:43.374 "data_offset": 0, 00:17:43.374 "data_size": 0 00:17:43.374 } 00:17:43.374 ] 00:17:43.374 }' 00:17:43.374 11:56:09 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.374 11:56:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.943 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:43.943 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.943 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.943 [2024-11-27 11:56:10.078118] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:43.943 [2024-11-27 11:56:10.078219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:43.943 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.943 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:43.943 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.943 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.943 [2024-11-27 11:56:10.090084] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:43.943 [2024-11-27 11:56:10.090168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:43.943 [2024-11-27 11:56:10.090201] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:43.943 [2024-11-27 11:56:10.090226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:43.943 11:56:10 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.943 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:43.943 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.943 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.943 [2024-11-27 11:56:10.139547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:43.943 BaseBdev1 00:17:43.943 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.943 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:43.943 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:43.943 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:43.943 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:43.943 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:43.943 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:43.943 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:43.943 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.943 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.943 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.943 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:43.943 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.943 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.943 [ 00:17:43.943 { 00:17:43.943 "name": "BaseBdev1", 00:17:43.943 "aliases": [ 00:17:43.943 "945c7fb1-e4a1-4467-8576-7c1fb2a9cc08" 00:17:43.943 ], 00:17:43.943 "product_name": "Malloc disk", 00:17:43.943 "block_size": 4096, 00:17:43.943 "num_blocks": 8192, 00:17:43.943 "uuid": "945c7fb1-e4a1-4467-8576-7c1fb2a9cc08", 00:17:43.943 "md_size": 32, 00:17:43.943 "md_interleave": false, 00:17:43.943 "dif_type": 0, 00:17:43.943 "assigned_rate_limits": { 00:17:43.943 "rw_ios_per_sec": 0, 00:17:43.943 "rw_mbytes_per_sec": 0, 00:17:43.943 "r_mbytes_per_sec": 0, 00:17:43.943 "w_mbytes_per_sec": 0 00:17:43.943 }, 00:17:43.943 "claimed": true, 00:17:43.943 "claim_type": "exclusive_write", 00:17:43.943 "zoned": false, 00:17:43.943 "supported_io_types": { 00:17:43.943 "read": true, 00:17:43.943 "write": true, 00:17:43.943 "unmap": true, 00:17:43.943 "flush": true, 00:17:43.943 "reset": true, 00:17:43.943 "nvme_admin": false, 00:17:43.943 "nvme_io": false, 00:17:43.943 "nvme_io_md": false, 00:17:43.943 "write_zeroes": true, 00:17:43.943 "zcopy": true, 00:17:43.943 "get_zone_info": false, 00:17:43.943 "zone_management": false, 00:17:43.943 "zone_append": false, 00:17:43.943 "compare": false, 00:17:43.943 "compare_and_write": false, 00:17:43.943 "abort": true, 00:17:43.943 "seek_hole": false, 00:17:43.943 "seek_data": false, 00:17:43.943 "copy": true, 00:17:43.943 "nvme_iov_md": false 00:17:43.943 }, 00:17:43.943 "memory_domains": [ 00:17:43.943 { 00:17:43.943 "dma_device_id": "system", 00:17:43.943 "dma_device_type": 1 00:17:43.943 }, 
00:17:43.943 { 00:17:43.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.944 "dma_device_type": 2 00:17:43.944 } 00:17:43.944 ], 00:17:43.944 "driver_specific": {} 00:17:43.944 } 00:17:43.944 ] 00:17:43.944 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.944 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:43.944 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:43.944 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:43.944 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:43.944 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.944 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.944 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:43.944 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.944 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.944 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.944 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.944 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.944 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:43.944 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.944 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.944 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.944 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.944 "name": "Existed_Raid", 00:17:43.944 "uuid": "6d86100d-b005-47da-9f71-7f9f9595a89e", 00:17:43.944 "strip_size_kb": 0, 00:17:43.944 "state": "configuring", 00:17:43.944 "raid_level": "raid1", 00:17:43.944 "superblock": true, 00:17:43.944 "num_base_bdevs": 2, 00:17:43.944 "num_base_bdevs_discovered": 1, 00:17:43.944 "num_base_bdevs_operational": 2, 00:17:43.944 "base_bdevs_list": [ 00:17:43.944 { 00:17:43.944 "name": "BaseBdev1", 00:17:43.944 "uuid": "945c7fb1-e4a1-4467-8576-7c1fb2a9cc08", 00:17:43.944 "is_configured": true, 00:17:43.944 "data_offset": 256, 00:17:43.944 "data_size": 7936 00:17:43.944 }, 00:17:43.944 { 00:17:43.944 "name": "BaseBdev2", 00:17:43.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.944 "is_configured": false, 00:17:43.944 "data_offset": 0, 00:17:43.944 "data_size": 0 00:17:43.944 } 00:17:43.944 ] 00:17:43.944 }' 00:17:43.944 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.944 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.513 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:44.513 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.513 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 
-- # set +x 00:17:44.513 [2024-11-27 11:56:10.614855] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:44.513 [2024-11-27 11:56:10.614911] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:44.513 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.513 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:44.513 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.513 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.513 [2024-11-27 11:56:10.626876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:44.513 [2024-11-27 11:56:10.628762] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:44.513 [2024-11-27 11:56:10.628864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:44.513 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.513 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:44.513 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:44.513 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:44.513 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:44.513 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:17:44.513 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.513 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.513 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:44.514 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.514 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.514 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.514 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.514 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.514 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.514 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.514 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.514 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.514 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.514 "name": "Existed_Raid", 00:17:44.514 "uuid": "81515169-c68e-4267-a222-f977234cd0c5", 00:17:44.514 "strip_size_kb": 0, 00:17:44.514 "state": "configuring", 00:17:44.514 "raid_level": "raid1", 00:17:44.514 "superblock": true, 00:17:44.514 "num_base_bdevs": 2, 00:17:44.514 "num_base_bdevs_discovered": 1, 00:17:44.514 
"num_base_bdevs_operational": 2, 00:17:44.514 "base_bdevs_list": [ 00:17:44.514 { 00:17:44.514 "name": "BaseBdev1", 00:17:44.514 "uuid": "945c7fb1-e4a1-4467-8576-7c1fb2a9cc08", 00:17:44.514 "is_configured": true, 00:17:44.514 "data_offset": 256, 00:17:44.514 "data_size": 7936 00:17:44.514 }, 00:17:44.514 { 00:17:44.514 "name": "BaseBdev2", 00:17:44.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.514 "is_configured": false, 00:17:44.514 "data_offset": 0, 00:17:44.514 "data_size": 0 00:17:44.514 } 00:17:44.514 ] 00:17:44.514 }' 00:17:44.514 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.514 11:56:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.773 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:44.773 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.773 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.773 [2024-11-27 11:56:11.137003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:44.773 [2024-11-27 11:56:11.137356] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:44.773 [2024-11-27 11:56:11.137426] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:44.773 [2024-11-27 11:56:11.137525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:44.773 [2024-11-27 11:56:11.137680] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:44.774 [2024-11-27 11:56:11.137725] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:44.774 BaseBdev2 
00:17:44.774 [2024-11-27 11:56:11.137884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.774 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.774 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:44.774 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:44.774 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:44.774 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:44.774 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:44.774 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:44.774 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:44.774 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.774 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.774 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.774 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:44.774 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.774 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.035 [ 00:17:45.035 { 00:17:45.035 "name": "BaseBdev2", 00:17:45.035 "aliases": [ 00:17:45.035 
"5864aed4-d891-433a-ba43-890f2109da63" 00:17:45.035 ], 00:17:45.035 "product_name": "Malloc disk", 00:17:45.035 "block_size": 4096, 00:17:45.035 "num_blocks": 8192, 00:17:45.035 "uuid": "5864aed4-d891-433a-ba43-890f2109da63", 00:17:45.035 "md_size": 32, 00:17:45.035 "md_interleave": false, 00:17:45.035 "dif_type": 0, 00:17:45.035 "assigned_rate_limits": { 00:17:45.035 "rw_ios_per_sec": 0, 00:17:45.035 "rw_mbytes_per_sec": 0, 00:17:45.035 "r_mbytes_per_sec": 0, 00:17:45.035 "w_mbytes_per_sec": 0 00:17:45.035 }, 00:17:45.035 "claimed": true, 00:17:45.035 "claim_type": "exclusive_write", 00:17:45.035 "zoned": false, 00:17:45.035 "supported_io_types": { 00:17:45.035 "read": true, 00:17:45.035 "write": true, 00:17:45.035 "unmap": true, 00:17:45.035 "flush": true, 00:17:45.035 "reset": true, 00:17:45.035 "nvme_admin": false, 00:17:45.035 "nvme_io": false, 00:17:45.035 "nvme_io_md": false, 00:17:45.035 "write_zeroes": true, 00:17:45.035 "zcopy": true, 00:17:45.035 "get_zone_info": false, 00:17:45.035 "zone_management": false, 00:17:45.035 "zone_append": false, 00:17:45.035 "compare": false, 00:17:45.035 "compare_and_write": false, 00:17:45.035 "abort": true, 00:17:45.035 "seek_hole": false, 00:17:45.035 "seek_data": false, 00:17:45.035 "copy": true, 00:17:45.035 "nvme_iov_md": false 00:17:45.035 }, 00:17:45.036 "memory_domains": [ 00:17:45.036 { 00:17:45.036 "dma_device_id": "system", 00:17:45.036 "dma_device_type": 1 00:17:45.036 }, 00:17:45.036 { 00:17:45.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.036 "dma_device_type": 2 00:17:45.036 } 00:17:45.036 ], 00:17:45.036 "driver_specific": {} 00:17:45.036 } 00:17:45.036 ] 00:17:45.036 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.036 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:45.036 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:17:45.036 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:45.036 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:45.036 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:45.036 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.036 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.036 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.036 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:45.036 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.036 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.036 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.036 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.036 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.036 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.036 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.036 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.036 11:56:11 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.036 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.036 "name": "Existed_Raid", 00:17:45.036 "uuid": "81515169-c68e-4267-a222-f977234cd0c5", 00:17:45.036 "strip_size_kb": 0, 00:17:45.036 "state": "online", 00:17:45.036 "raid_level": "raid1", 00:17:45.036 "superblock": true, 00:17:45.036 "num_base_bdevs": 2, 00:17:45.036 "num_base_bdevs_discovered": 2, 00:17:45.036 "num_base_bdevs_operational": 2, 00:17:45.036 "base_bdevs_list": [ 00:17:45.036 { 00:17:45.036 "name": "BaseBdev1", 00:17:45.036 "uuid": "945c7fb1-e4a1-4467-8576-7c1fb2a9cc08", 00:17:45.036 "is_configured": true, 00:17:45.036 "data_offset": 256, 00:17:45.036 "data_size": 7936 00:17:45.036 }, 00:17:45.036 { 00:17:45.036 "name": "BaseBdev2", 00:17:45.036 "uuid": "5864aed4-d891-433a-ba43-890f2109da63", 00:17:45.036 "is_configured": true, 00:17:45.036 "data_offset": 256, 00:17:45.036 "data_size": 7936 00:17:45.036 } 00:17:45.036 ] 00:17:45.036 }' 00:17:45.036 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.036 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.298 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:45.298 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:45.298 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:45.298 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:45.298 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:45.298 11:56:11 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:45.298 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:45.298 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:45.298 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.298 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.298 [2024-11-27 11:56:11.636582] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:45.298 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.298 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:45.299 "name": "Existed_Raid", 00:17:45.299 "aliases": [ 00:17:45.299 "81515169-c68e-4267-a222-f977234cd0c5" 00:17:45.299 ], 00:17:45.299 "product_name": "Raid Volume", 00:17:45.299 "block_size": 4096, 00:17:45.299 "num_blocks": 7936, 00:17:45.299 "uuid": "81515169-c68e-4267-a222-f977234cd0c5", 00:17:45.299 "md_size": 32, 00:17:45.299 "md_interleave": false, 00:17:45.299 "dif_type": 0, 00:17:45.299 "assigned_rate_limits": { 00:17:45.299 "rw_ios_per_sec": 0, 00:17:45.299 "rw_mbytes_per_sec": 0, 00:17:45.299 "r_mbytes_per_sec": 0, 00:17:45.299 "w_mbytes_per_sec": 0 00:17:45.299 }, 00:17:45.299 "claimed": false, 00:17:45.299 "zoned": false, 00:17:45.299 "supported_io_types": { 00:17:45.299 "read": true, 00:17:45.299 "write": true, 00:17:45.299 "unmap": false, 00:17:45.299 "flush": false, 00:17:45.299 "reset": true, 00:17:45.299 "nvme_admin": false, 00:17:45.299 "nvme_io": false, 00:17:45.299 "nvme_io_md": false, 00:17:45.299 "write_zeroes": true, 00:17:45.299 "zcopy": false, 00:17:45.299 "get_zone_info": 
false, 00:17:45.299 "zone_management": false, 00:17:45.299 "zone_append": false, 00:17:45.299 "compare": false, 00:17:45.299 "compare_and_write": false, 00:17:45.299 "abort": false, 00:17:45.299 "seek_hole": false, 00:17:45.299 "seek_data": false, 00:17:45.299 "copy": false, 00:17:45.299 "nvme_iov_md": false 00:17:45.299 }, 00:17:45.299 "memory_domains": [ 00:17:45.299 { 00:17:45.299 "dma_device_id": "system", 00:17:45.299 "dma_device_type": 1 00:17:45.299 }, 00:17:45.299 { 00:17:45.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.299 "dma_device_type": 2 00:17:45.299 }, 00:17:45.299 { 00:17:45.299 "dma_device_id": "system", 00:17:45.299 "dma_device_type": 1 00:17:45.299 }, 00:17:45.299 { 00:17:45.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.299 "dma_device_type": 2 00:17:45.299 } 00:17:45.299 ], 00:17:45.299 "driver_specific": { 00:17:45.299 "raid": { 00:17:45.299 "uuid": "81515169-c68e-4267-a222-f977234cd0c5", 00:17:45.299 "strip_size_kb": 0, 00:17:45.299 "state": "online", 00:17:45.299 "raid_level": "raid1", 00:17:45.299 "superblock": true, 00:17:45.299 "num_base_bdevs": 2, 00:17:45.299 "num_base_bdevs_discovered": 2, 00:17:45.299 "num_base_bdevs_operational": 2, 00:17:45.299 "base_bdevs_list": [ 00:17:45.299 { 00:17:45.299 "name": "BaseBdev1", 00:17:45.299 "uuid": "945c7fb1-e4a1-4467-8576-7c1fb2a9cc08", 00:17:45.299 "is_configured": true, 00:17:45.299 "data_offset": 256, 00:17:45.299 "data_size": 7936 00:17:45.299 }, 00:17:45.299 { 00:17:45.299 "name": "BaseBdev2", 00:17:45.299 "uuid": "5864aed4-d891-433a-ba43-890f2109da63", 00:17:45.299 "is_configured": true, 00:17:45.299 "data_offset": 256, 00:17:45.299 "data_size": 7936 00:17:45.299 } 00:17:45.299 ] 00:17:45.299 } 00:17:45.299 } 00:17:45.299 }' 00:17:45.299 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:45.558 11:56:11 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:45.558 BaseBdev2' 00:17:45.558 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:45.558 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:45.559 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:45.559 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:45.559 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:45.559 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.559 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.559 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.559 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:45.559 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:45.559 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:45.559 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:45.559 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:17:45.559 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.559 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.559 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.559 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:45.559 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:45.559 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:45.559 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.559 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.559 [2024-11-27 11:56:11.844041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:45.818 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.818 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:45.818 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:45.818 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:45.818 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:45.818 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:45.818 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid 
online raid1 0 1 00:17:45.818 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:45.818 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.818 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.818 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.818 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:45.818 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.818 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.818 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.818 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.818 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.818 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.818 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.818 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.818 11:56:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.818 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.818 "name": "Existed_Raid", 00:17:45.818 "uuid": 
"81515169-c68e-4267-a222-f977234cd0c5", 00:17:45.818 "strip_size_kb": 0, 00:17:45.818 "state": "online", 00:17:45.818 "raid_level": "raid1", 00:17:45.818 "superblock": true, 00:17:45.818 "num_base_bdevs": 2, 00:17:45.818 "num_base_bdevs_discovered": 1, 00:17:45.818 "num_base_bdevs_operational": 1, 00:17:45.819 "base_bdevs_list": [ 00:17:45.819 { 00:17:45.819 "name": null, 00:17:45.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.819 "is_configured": false, 00:17:45.819 "data_offset": 0, 00:17:45.819 "data_size": 7936 00:17:45.819 }, 00:17:45.819 { 00:17:45.819 "name": "BaseBdev2", 00:17:45.819 "uuid": "5864aed4-d891-433a-ba43-890f2109da63", 00:17:45.819 "is_configured": true, 00:17:45.819 "data_offset": 256, 00:17:45.819 "data_size": 7936 00:17:45.819 } 00:17:45.819 ] 00:17:45.819 }' 00:17:45.819 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.819 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.079 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:46.079 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:46.079 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.079 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:46.079 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.079 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.079 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.339 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 
-- # raid_bdev=Existed_Raid 00:17:46.339 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:46.339 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:46.339 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.339 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.339 [2024-11-27 11:56:12.490322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:46.339 [2024-11-27 11:56:12.490434] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:46.339 [2024-11-27 11:56:12.594810] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:46.339 [2024-11-27 11:56:12.594872] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:46.339 [2024-11-27 11:56:12.594886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:46.339 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.339 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:46.339 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:46.339 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.339 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:46.339 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.339 11:56:12 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.339 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.339 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:46.339 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:46.339 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:46.339 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87265 00:17:46.339 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87265 ']' 00:17:46.339 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87265 00:17:46.339 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:46.339 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:46.339 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87265 00:17:46.339 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:46.339 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:46.339 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87265' 00:17:46.339 killing process with pid 87265 00:17:46.339 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87265 00:17:46.339 [2024-11-27 11:56:12.693083] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:17:46.339 11:56:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87265 00:17:46.339 [2024-11-27 11:56:12.710092] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:47.720 11:56:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:17:47.720 00:17:47.720 real 0m5.151s 00:17:47.720 user 0m7.364s 00:17:47.720 sys 0m0.895s 00:17:47.720 11:56:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:47.720 11:56:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.720 ************************************ 00:17:47.720 END TEST raid_state_function_test_sb_md_separate 00:17:47.720 ************************************ 00:17:47.720 11:56:13 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:47.720 11:56:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:47.720 11:56:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:47.720 11:56:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:47.720 ************************************ 00:17:47.720 START TEST raid_superblock_test_md_separate 00:17:47.720 ************************************ 00:17:47.720 11:56:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:47.720 11:56:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:47.720 11:56:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:47.720 11:56:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:47.720 11:56:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 
00:17:47.720 11:56:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:47.720 11:56:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:47.720 11:56:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:47.720 11:56:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:47.720 11:56:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:47.720 11:56:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:47.720 11:56:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:47.720 11:56:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:47.720 11:56:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:47.720 11:56:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:47.720 11:56:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:47.720 11:56:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87512 00:17:47.720 11:56:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:47.720 11:56:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87512 00:17:47.720 11:56:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87512 ']' 00:17:47.720 11:56:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.720 11:56:13 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.720 11:56:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.720 11:56:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.720 11:56:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.720 [2024-11-27 11:56:14.017037] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:17:47.720 [2024-11-27 11:56:14.017259] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87512 ] 00:17:47.980 [2024-11-27 11:56:14.190811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.980 [2024-11-27 11:56:14.305731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.240 [2024-11-27 11:56:14.499674] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:48.240 [2024-11-27 11:56:14.499719] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:48.500 11:56:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.500 11:56:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:48.500 11:56:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:48.500 11:56:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:48.500 11:56:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 
-- # local bdev_malloc=malloc1 00:17:48.500 11:56:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:48.500 11:56:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:48.500 11:56:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:48.500 11:56:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:48.500 11:56:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:48.500 11:56:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:48.500 11:56:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.500 11:56:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.760 malloc1 00:17:48.760 11:56:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.760 11:56:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:48.760 11:56:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.760 11:56:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.760 [2024-11-27 11:56:14.923122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:48.760 [2024-11-27 11:56:14.923246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.760 [2024-11-27 11:56:14.923304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:48.760 [2024-11-27 
11:56:14.923341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.760 [2024-11-27 11:56:14.925344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.760 [2024-11-27 11:56:14.925416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:48.760 pt1 00:17:48.760 11:56:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.760 11:56:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:48.760 11:56:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:48.760 11:56:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:48.760 11:56:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:48.760 11:56:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:48.760 11:56:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:48.760 11:56:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:48.760 11:56:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:48.760 11:56:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:48.760 11:56:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.760 11:56:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.760 malloc2 00:17:48.760 11:56:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.760 11:56:14 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:48.760 11:56:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.760 11:56:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.760 [2024-11-27 11:56:14.983890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:48.760 [2024-11-27 11:56:14.984010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.760 [2024-11-27 11:56:14.984034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:48.760 [2024-11-27 11:56:14.984044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.760 [2024-11-27 11:56:14.985929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.760 [2024-11-27 11:56:14.985964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:48.760 pt2 00:17:48.760 11:56:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.760 11:56:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:48.760 11:56:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:48.760 11:56:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:48.760 11:56:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.760 11:56:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.760 [2024-11-27 11:56:14.995894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:48.760 
[2024-11-27 11:56:14.997707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:48.760 [2024-11-27 11:56:14.997911] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:48.760 [2024-11-27 11:56:14.997926] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:48.760 [2024-11-27 11:56:14.997998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:48.760 [2024-11-27 11:56:14.998123] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:48.760 [2024-11-27 11:56:14.998134] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:48.760 [2024-11-27 11:56:14.998242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.760 11:56:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.760 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:48.760 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.760 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.761 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.761 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.761 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:48.761 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.761 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:48.761 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.761 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.761 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.761 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.761 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.761 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.761 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.761 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.761 "name": "raid_bdev1", 00:17:48.761 "uuid": "2bfd4cc6-bf72-4c79-8e2c-baa33d21cf27", 00:17:48.761 "strip_size_kb": 0, 00:17:48.761 "state": "online", 00:17:48.761 "raid_level": "raid1", 00:17:48.761 "superblock": true, 00:17:48.761 "num_base_bdevs": 2, 00:17:48.761 "num_base_bdevs_discovered": 2, 00:17:48.761 "num_base_bdevs_operational": 2, 00:17:48.761 "base_bdevs_list": [ 00:17:48.761 { 00:17:48.761 "name": "pt1", 00:17:48.761 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:48.761 "is_configured": true, 00:17:48.761 "data_offset": 256, 00:17:48.761 "data_size": 7936 00:17:48.761 }, 00:17:48.761 { 00:17:48.761 "name": "pt2", 00:17:48.761 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:48.761 "is_configured": true, 00:17:48.761 "data_offset": 256, 00:17:48.761 "data_size": 7936 00:17:48.761 } 00:17:48.761 ] 00:17:48.761 }' 00:17:48.761 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.761 11:56:15 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.331 [2024-11-27 11:56:15.431463] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:49.331 "name": "raid_bdev1", 00:17:49.331 "aliases": [ 00:17:49.331 "2bfd4cc6-bf72-4c79-8e2c-baa33d21cf27" 00:17:49.331 ], 00:17:49.331 "product_name": "Raid Volume", 00:17:49.331 "block_size": 4096, 00:17:49.331 "num_blocks": 7936, 00:17:49.331 "uuid": "2bfd4cc6-bf72-4c79-8e2c-baa33d21cf27", 00:17:49.331 "md_size": 32, 00:17:49.331 "md_interleave": false, 00:17:49.331 "dif_type": 0, 00:17:49.331 
"assigned_rate_limits": { 00:17:49.331 "rw_ios_per_sec": 0, 00:17:49.331 "rw_mbytes_per_sec": 0, 00:17:49.331 "r_mbytes_per_sec": 0, 00:17:49.331 "w_mbytes_per_sec": 0 00:17:49.331 }, 00:17:49.331 "claimed": false, 00:17:49.331 "zoned": false, 00:17:49.331 "supported_io_types": { 00:17:49.331 "read": true, 00:17:49.331 "write": true, 00:17:49.331 "unmap": false, 00:17:49.331 "flush": false, 00:17:49.331 "reset": true, 00:17:49.331 "nvme_admin": false, 00:17:49.331 "nvme_io": false, 00:17:49.331 "nvme_io_md": false, 00:17:49.331 "write_zeroes": true, 00:17:49.331 "zcopy": false, 00:17:49.331 "get_zone_info": false, 00:17:49.331 "zone_management": false, 00:17:49.331 "zone_append": false, 00:17:49.331 "compare": false, 00:17:49.331 "compare_and_write": false, 00:17:49.331 "abort": false, 00:17:49.331 "seek_hole": false, 00:17:49.331 "seek_data": false, 00:17:49.331 "copy": false, 00:17:49.331 "nvme_iov_md": false 00:17:49.331 }, 00:17:49.331 "memory_domains": [ 00:17:49.331 { 00:17:49.331 "dma_device_id": "system", 00:17:49.331 "dma_device_type": 1 00:17:49.331 }, 00:17:49.331 { 00:17:49.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.331 "dma_device_type": 2 00:17:49.331 }, 00:17:49.331 { 00:17:49.331 "dma_device_id": "system", 00:17:49.331 "dma_device_type": 1 00:17:49.331 }, 00:17:49.331 { 00:17:49.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.331 "dma_device_type": 2 00:17:49.331 } 00:17:49.331 ], 00:17:49.331 "driver_specific": { 00:17:49.331 "raid": { 00:17:49.331 "uuid": "2bfd4cc6-bf72-4c79-8e2c-baa33d21cf27", 00:17:49.331 "strip_size_kb": 0, 00:17:49.331 "state": "online", 00:17:49.331 "raid_level": "raid1", 00:17:49.331 "superblock": true, 00:17:49.331 "num_base_bdevs": 2, 00:17:49.331 "num_base_bdevs_discovered": 2, 00:17:49.331 "num_base_bdevs_operational": 2, 00:17:49.331 "base_bdevs_list": [ 00:17:49.331 { 00:17:49.331 "name": "pt1", 00:17:49.331 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:49.331 "is_configured": true, 
00:17:49.331 "data_offset": 256, 00:17:49.331 "data_size": 7936 00:17:49.331 }, 00:17:49.331 { 00:17:49.331 "name": "pt2", 00:17:49.331 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.331 "is_configured": true, 00:17:49.331 "data_offset": 256, 00:17:49.331 "data_size": 7936 00:17:49.331 } 00:17:49.331 ] 00:17:49.331 } 00:17:49.331 } 00:17:49.331 }' 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:49.331 pt2' 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ 
\f\a\l\s\e\ \0 ]] 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:49.331 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:49.332 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:49.332 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.332 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.332 [2024-11-27 11:56:15.655006] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:49.332 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.332 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2bfd4cc6-bf72-4c79-8e2c-baa33d21cf27 00:17:49.332 11:56:15 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@436 -- # '[' -z 2bfd4cc6-bf72-4c79-8e2c-baa33d21cf27 ']' 00:17:49.332 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:49.332 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.332 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.332 [2024-11-27 11:56:15.698648] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:49.332 [2024-11-27 11:56:15.698715] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:49.332 [2024-11-27 11:56:15.698828] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:49.332 [2024-11-27 11:56:15.698922] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:49.332 [2024-11-27 11:56:15.698989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:49.332 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.332 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:49.332 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.332 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.332 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- 
# '[' -n '' ']' 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:49.592 11:56:15 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.592 [2024-11-27 11:56:15.814503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:49.592 [2024-11-27 11:56:15.816504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:49.592 [2024-11-27 11:56:15.816643] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:49.592 [2024-11-27 11:56:15.816749] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:49.592 [2024-11-27 11:56:15.816819] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:49.592 [2024-11-27 11:56:15.816867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:49.592 request: 00:17:49.592 { 00:17:49.592 "name": "raid_bdev1", 00:17:49.592 "raid_level": "raid1", 00:17:49.592 "base_bdevs": [ 00:17:49.592 "malloc1", 00:17:49.592 "malloc2" 00:17:49.592 ], 00:17:49.592 "superblock": false, 00:17:49.592 "method": "bdev_raid_create", 00:17:49.592 "req_id": 1 00:17:49.592 } 00:17:49.592 Got JSON-RPC error response 00:17:49.592 response: 00:17:49.592 { 00:17:49.592 "code": -17, 00:17:49.592 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:49.592 } 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # 
raid_bdev= 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.592 [2024-11-27 11:56:15.878337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:49.592 [2024-11-27 11:56:15.878429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.592 [2024-11-27 11:56:15.878450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:49.592 [2024-11-27 11:56:15.878461] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.592 [2024-11-27 11:56:15.880462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.592 [2024-11-27 11:56:15.880503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:49.592 [2024-11-27 11:56:15.880556] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:49.592 [2024-11-27 11:56:15.880612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:49.592 pt1 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 
-- # local expected_state=configuring 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.592 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.592 "name": "raid_bdev1", 00:17:49.592 "uuid": "2bfd4cc6-bf72-4c79-8e2c-baa33d21cf27", 00:17:49.592 "strip_size_kb": 0, 00:17:49.592 "state": "configuring", 00:17:49.592 "raid_level": "raid1", 00:17:49.592 "superblock": true, 00:17:49.592 "num_base_bdevs": 2, 00:17:49.592 "num_base_bdevs_discovered": 1, 00:17:49.593 "num_base_bdevs_operational": 2, 00:17:49.593 "base_bdevs_list": [ 00:17:49.593 { 
00:17:49.593 "name": "pt1", 00:17:49.593 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:49.593 "is_configured": true, 00:17:49.593 "data_offset": 256, 00:17:49.593 "data_size": 7936 00:17:49.593 }, 00:17:49.593 { 00:17:49.593 "name": null, 00:17:49.593 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.593 "is_configured": false, 00:17:49.593 "data_offset": 256, 00:17:49.593 "data_size": 7936 00:17:49.593 } 00:17:49.593 ] 00:17:49.593 }' 00:17:49.593 11:56:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.593 11:56:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.162 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:50.162 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:50.162 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:50.162 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:50.162 11:56:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.162 11:56:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.162 [2024-11-27 11:56:16.337591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:50.162 [2024-11-27 11:56:16.337736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.162 [2024-11-27 11:56:16.337810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:50.162 [2024-11-27 11:56:16.337861] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.162 [2024-11-27 11:56:16.338164] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:17:50.162 [2024-11-27 11:56:16.338227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:50.162 [2024-11-27 11:56:16.338316] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:50.162 [2024-11-27 11:56:16.338370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:50.162 [2024-11-27 11:56:16.338533] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:50.162 [2024-11-27 11:56:16.338576] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:50.162 [2024-11-27 11:56:16.338694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:50.162 [2024-11-27 11:56:16.338880] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:50.162 [2024-11-27 11:56:16.338919] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:50.162 [2024-11-27 11:56:16.339066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.162 pt2 00:17:50.162 11:56:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.162 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:50.162 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:50.162 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:50.163 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.163 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.163 11:56:16 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.163 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.163 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:50.163 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.163 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.163 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.163 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.163 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.163 11:56:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.163 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.163 11:56:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.163 11:56:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.163 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.163 "name": "raid_bdev1", 00:17:50.163 "uuid": "2bfd4cc6-bf72-4c79-8e2c-baa33d21cf27", 00:17:50.163 "strip_size_kb": 0, 00:17:50.163 "state": "online", 00:17:50.163 "raid_level": "raid1", 00:17:50.163 "superblock": true, 00:17:50.163 "num_base_bdevs": 2, 00:17:50.163 "num_base_bdevs_discovered": 2, 00:17:50.163 "num_base_bdevs_operational": 2, 00:17:50.163 "base_bdevs_list": [ 00:17:50.163 { 00:17:50.163 "name": "pt1", 00:17:50.163 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:50.163 
"is_configured": true, 00:17:50.163 "data_offset": 256, 00:17:50.163 "data_size": 7936 00:17:50.163 }, 00:17:50.163 { 00:17:50.163 "name": "pt2", 00:17:50.163 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.163 "is_configured": true, 00:17:50.163 "data_offset": 256, 00:17:50.163 "data_size": 7936 00:17:50.163 } 00:17:50.163 ] 00:17:50.163 }' 00:17:50.163 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.163 11:56:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.422 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:50.422 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:50.422 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:50.422 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:50.422 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:50.422 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:50.682 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:50.682 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:50.682 11:56:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.682 11:56:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.682 [2024-11-27 11:56:16.813097] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.682 11:56:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:17:50.682 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:50.682 "name": "raid_bdev1", 00:17:50.682 "aliases": [ 00:17:50.682 "2bfd4cc6-bf72-4c79-8e2c-baa33d21cf27" 00:17:50.682 ], 00:17:50.682 "product_name": "Raid Volume", 00:17:50.682 "block_size": 4096, 00:17:50.682 "num_blocks": 7936, 00:17:50.682 "uuid": "2bfd4cc6-bf72-4c79-8e2c-baa33d21cf27", 00:17:50.682 "md_size": 32, 00:17:50.682 "md_interleave": false, 00:17:50.682 "dif_type": 0, 00:17:50.682 "assigned_rate_limits": { 00:17:50.682 "rw_ios_per_sec": 0, 00:17:50.682 "rw_mbytes_per_sec": 0, 00:17:50.682 "r_mbytes_per_sec": 0, 00:17:50.682 "w_mbytes_per_sec": 0 00:17:50.682 }, 00:17:50.682 "claimed": false, 00:17:50.682 "zoned": false, 00:17:50.682 "supported_io_types": { 00:17:50.682 "read": true, 00:17:50.682 "write": true, 00:17:50.682 "unmap": false, 00:17:50.682 "flush": false, 00:17:50.682 "reset": true, 00:17:50.682 "nvme_admin": false, 00:17:50.682 "nvme_io": false, 00:17:50.682 "nvme_io_md": false, 00:17:50.682 "write_zeroes": true, 00:17:50.682 "zcopy": false, 00:17:50.682 "get_zone_info": false, 00:17:50.682 "zone_management": false, 00:17:50.682 "zone_append": false, 00:17:50.682 "compare": false, 00:17:50.682 "compare_and_write": false, 00:17:50.682 "abort": false, 00:17:50.682 "seek_hole": false, 00:17:50.682 "seek_data": false, 00:17:50.682 "copy": false, 00:17:50.682 "nvme_iov_md": false 00:17:50.682 }, 00:17:50.682 "memory_domains": [ 00:17:50.682 { 00:17:50.682 "dma_device_id": "system", 00:17:50.682 "dma_device_type": 1 00:17:50.682 }, 00:17:50.682 { 00:17:50.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.682 "dma_device_type": 2 00:17:50.682 }, 00:17:50.682 { 00:17:50.682 "dma_device_id": "system", 00:17:50.682 "dma_device_type": 1 00:17:50.682 }, 00:17:50.682 { 00:17:50.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.682 "dma_device_type": 2 00:17:50.682 } 00:17:50.682 ], 00:17:50.682 "driver_specific": { 
00:17:50.682 "raid": { 00:17:50.682 "uuid": "2bfd4cc6-bf72-4c79-8e2c-baa33d21cf27", 00:17:50.682 "strip_size_kb": 0, 00:17:50.682 "state": "online", 00:17:50.682 "raid_level": "raid1", 00:17:50.682 "superblock": true, 00:17:50.682 "num_base_bdevs": 2, 00:17:50.682 "num_base_bdevs_discovered": 2, 00:17:50.682 "num_base_bdevs_operational": 2, 00:17:50.682 "base_bdevs_list": [ 00:17:50.682 { 00:17:50.682 "name": "pt1", 00:17:50.682 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:50.682 "is_configured": true, 00:17:50.682 "data_offset": 256, 00:17:50.682 "data_size": 7936 00:17:50.682 }, 00:17:50.682 { 00:17:50.682 "name": "pt2", 00:17:50.682 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.682 "is_configured": true, 00:17:50.682 "data_offset": 256, 00:17:50.682 "data_size": 7936 00:17:50.682 } 00:17:50.682 ] 00:17:50.682 } 00:17:50.682 } 00:17:50.682 }' 00:17:50.682 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:50.682 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:50.682 pt2' 00:17:50.682 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.682 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:50.682 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.682 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.682 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:50.682 11:56:16 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.682 11:56:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.682 11:56:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.683 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:50.683 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:50.683 11:56:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.683 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.683 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:50.683 11:56:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.683 11:56:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.683 11:56:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.683 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:50.683 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:50.683 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:50.683 11:56:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.683 11:56:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.683 11:56:17 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:50.683 [2024-11-27 11:56:17.040685] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.683 11:56:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.942 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 2bfd4cc6-bf72-4c79-8e2c-baa33d21cf27 '!=' 2bfd4cc6-bf72-4c79-8e2c-baa33d21cf27 ']' 00:17:50.942 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:50.942 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:50.942 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:50.942 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:50.942 11:56:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.942 11:56:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.942 [2024-11-27 11:56:17.092370] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:50.942 11:56:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.942 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:50.942 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.942 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.942 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.942 11:56:17 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.942 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:50.942 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.942 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.942 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.942 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.942 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.942 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.942 11:56:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.942 11:56:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.943 11:56:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.943 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.943 "name": "raid_bdev1", 00:17:50.943 "uuid": "2bfd4cc6-bf72-4c79-8e2c-baa33d21cf27", 00:17:50.943 "strip_size_kb": 0, 00:17:50.943 "state": "online", 00:17:50.943 "raid_level": "raid1", 00:17:50.943 "superblock": true, 00:17:50.943 "num_base_bdevs": 2, 00:17:50.943 "num_base_bdevs_discovered": 1, 00:17:50.943 "num_base_bdevs_operational": 1, 00:17:50.943 "base_bdevs_list": [ 00:17:50.943 { 00:17:50.943 "name": null, 00:17:50.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.943 "is_configured": false, 00:17:50.943 "data_offset": 0, 00:17:50.943 "data_size": 7936 00:17:50.943 }, 00:17:50.943 { 00:17:50.943 
"name": "pt2", 00:17:50.943 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.943 "is_configured": true, 00:17:50.943 "data_offset": 256, 00:17:50.943 "data_size": 7936 00:17:50.943 } 00:17:50.943 ] 00:17:50.943 }' 00:17:50.943 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.943 11:56:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.217 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:51.217 11:56:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.217 11:56:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.217 [2024-11-27 11:56:17.527591] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:51.217 [2024-11-27 11:56:17.527622] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:51.217 [2024-11-27 11:56:17.527720] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:51.217 [2024-11-27 11:56:17.527773] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:51.217 [2024-11-27 11:56:17.527786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:51.217 11:56:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.217 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.217 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:51.217 11:56:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.217 11:56:17 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.217 11:56:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.217 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:51.217 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:51.217 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:51.217 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:51.217 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:51.217 11:56:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.217 11:56:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.477 11:56:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.477 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:51.477 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:51.477 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:51.477 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:51.477 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:17:51.477 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:51.477 11:56:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.477 11:56:17 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.477 [2024-11-27 11:56:17.607454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:51.477 [2024-11-27 11:56:17.607589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.477 [2024-11-27 11:56:17.607635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:51.477 [2024-11-27 11:56:17.607669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.477 [2024-11-27 11:56:17.609941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.477 [2024-11-27 11:56:17.609984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:51.477 [2024-11-27 11:56:17.610048] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:51.477 [2024-11-27 11:56:17.610102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:51.477 [2024-11-27 11:56:17.610213] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:51.477 [2024-11-27 11:56:17.610226] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:51.477 [2024-11-27 11:56:17.610312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:51.477 [2024-11-27 11:56:17.610420] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:51.477 [2024-11-27 11:56:17.610428] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:51.477 [2024-11-27 11:56:17.610531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.477 pt2 00:17:51.477 11:56:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.477 11:56:17 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:51.477 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.477 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.477 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.477 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.477 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:51.477 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.477 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.477 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.477 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.477 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.477 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.477 11:56:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.477 11:56:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.477 11:56:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.477 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.477 "name": "raid_bdev1", 00:17:51.477 "uuid": 
"2bfd4cc6-bf72-4c79-8e2c-baa33d21cf27", 00:17:51.477 "strip_size_kb": 0, 00:17:51.477 "state": "online", 00:17:51.477 "raid_level": "raid1", 00:17:51.477 "superblock": true, 00:17:51.477 "num_base_bdevs": 2, 00:17:51.477 "num_base_bdevs_discovered": 1, 00:17:51.477 "num_base_bdevs_operational": 1, 00:17:51.477 "base_bdevs_list": [ 00:17:51.477 { 00:17:51.477 "name": null, 00:17:51.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.477 "is_configured": false, 00:17:51.477 "data_offset": 256, 00:17:51.477 "data_size": 7936 00:17:51.477 }, 00:17:51.477 { 00:17:51.477 "name": "pt2", 00:17:51.477 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:51.477 "is_configured": true, 00:17:51.477 "data_offset": 256, 00:17:51.477 "data_size": 7936 00:17:51.477 } 00:17:51.477 ] 00:17:51.477 }' 00:17:51.477 11:56:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.477 11:56:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.737 11:56:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:51.737 11:56:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.737 11:56:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.737 [2024-11-27 11:56:18.098570] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:51.737 [2024-11-27 11:56:18.098604] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:51.737 [2024-11-27 11:56:18.098685] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:51.737 [2024-11-27 11:56:18.098738] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:51.737 [2024-11-27 11:56:18.098748] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:51.737 11:56:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.737 11:56:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.737 11:56:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.737 11:56:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.737 11:56:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:51.737 11:56:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.997 11:56:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:51.997 11:56:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:51.997 11:56:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:51.997 11:56:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:51.997 11:56:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.997 11:56:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.997 [2024-11-27 11:56:18.162480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:51.997 [2024-11-27 11:56:18.162582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.997 [2024-11-27 11:56:18.162637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:51.997 [2024-11-27 11:56:18.162673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.997 [2024-11-27 
11:56:18.164908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.997 [2024-11-27 11:56:18.164992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:51.997 [2024-11-27 11:56:18.165091] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:51.997 [2024-11-27 11:56:18.165186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:51.997 [2024-11-27 11:56:18.165369] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:51.997 [2024-11-27 11:56:18.165432] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:51.997 [2024-11-27 11:56:18.165481] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:51.997 [2024-11-27 11:56:18.165600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:51.997 [2024-11-27 11:56:18.165725] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:51.997 [2024-11-27 11:56:18.165760] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:51.997 [2024-11-27 11:56:18.165851] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:51.997 [2024-11-27 11:56:18.165996] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:51.997 [2024-11-27 11:56:18.166033] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:51.997 [2024-11-27 11:56:18.166190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.997 pt1 00:17:51.997 11:56:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.997 11:56:18 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:51.997 11:56:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:51.997 11:56:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.997 11:56:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.997 11:56:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.997 11:56:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.997 11:56:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:51.997 11:56:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.997 11:56:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.997 11:56:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.997 11:56:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.997 11:56:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.997 11:56:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.997 11:56:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.997 11:56:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.997 11:56:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.997 11:56:18 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.997 "name": "raid_bdev1", 00:17:51.997 "uuid": "2bfd4cc6-bf72-4c79-8e2c-baa33d21cf27", 00:17:51.997 "strip_size_kb": 0, 00:17:51.997 "state": "online", 00:17:51.997 "raid_level": "raid1", 00:17:51.997 "superblock": true, 00:17:51.997 "num_base_bdevs": 2, 00:17:51.997 "num_base_bdevs_discovered": 1, 00:17:51.997 "num_base_bdevs_operational": 1, 00:17:51.997 "base_bdevs_list": [ 00:17:51.997 { 00:17:51.997 "name": null, 00:17:51.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.997 "is_configured": false, 00:17:51.997 "data_offset": 256, 00:17:51.997 "data_size": 7936 00:17:51.997 }, 00:17:51.997 { 00:17:51.997 "name": "pt2", 00:17:51.997 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:51.997 "is_configured": true, 00:17:51.997 "data_offset": 256, 00:17:51.997 "data_size": 7936 00:17:51.997 } 00:17:51.997 ] 00:17:51.997 }' 00:17:51.997 11:56:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.997 11:56:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.256 11:56:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:52.256 11:56:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.256 11:56:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:52.256 11:56:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.256 11:56:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.256 11:56:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:52.256 11:56:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:17:52.256 11:56:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.256 11:56:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:52.256 11:56:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.256 [2024-11-27 11:56:18.637985] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:52.515 11:56:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.515 11:56:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 2bfd4cc6-bf72-4c79-8e2c-baa33d21cf27 '!=' 2bfd4cc6-bf72-4c79-8e2c-baa33d21cf27 ']' 00:17:52.515 11:56:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87512 00:17:52.515 11:56:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87512 ']' 00:17:52.515 11:56:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87512 00:17:52.515 11:56:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:52.515 11:56:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.515 11:56:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87512 00:17:52.515 killing process with pid 87512 00:17:52.515 11:56:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:52.515 11:56:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:52.515 11:56:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87512' 00:17:52.515 11:56:18 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@973 -- # kill 87512 00:17:52.515 [2024-11-27 11:56:18.717296] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:52.515 [2024-11-27 11:56:18.717402] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:52.515 11:56:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87512 00:17:52.515 [2024-11-27 11:56:18.717468] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:52.515 [2024-11-27 11:56:18.717489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:52.774 [2024-11-27 11:56:18.943634] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:54.152 11:56:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:17:54.152 00:17:54.152 real 0m6.171s 00:17:54.152 user 0m9.345s 00:17:54.152 sys 0m1.084s 00:17:54.152 ************************************ 00:17:54.152 END TEST raid_superblock_test_md_separate 00:17:54.152 ************************************ 00:17:54.152 11:56:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:54.152 11:56:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.152 11:56:20 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:17:54.152 11:56:20 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:17:54.152 11:56:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:54.152 11:56:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:54.152 11:56:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:54.152 ************************************ 00:17:54.152 START TEST raid_rebuild_test_sb_md_separate 00:17:54.152 
************************************ 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87840 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87840 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87840 ']' 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:54.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.152 11:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.152 [2024-11-27 11:56:20.267116] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:17:54.152 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:54.152 Zero copy mechanism will not be used. 00:17:54.152 [2024-11-27 11:56:20.267300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87840 ] 00:17:54.152 [2024-11-27 11:56:20.423061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.411 [2024-11-27 11:56:20.539529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.411 [2024-11-27 11:56:20.744276] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:54.411 [2024-11-27 11:56:20.744416] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:54.980 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.980 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:54.980 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:54.980 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:17:54.980 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.980 11:56:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.980 BaseBdev1_malloc 00:17:54.980 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.980 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:54.980 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.980 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.980 [2024-11-27 11:56:21.157265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:54.980 [2024-11-27 11:56:21.157329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.980 [2024-11-27 11:56:21.157366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:54.980 [2024-11-27 11:56:21.157377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.980 [2024-11-27 11:56:21.159197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.980 [2024-11-27 11:56:21.159235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:54.980 BaseBdev1 00:17:54.980 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.980 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:54.980 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:17:54.980 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.980 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:17:54.980 BaseBdev2_malloc 00:17:54.980 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.980 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:54.980 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.980 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.980 [2024-11-27 11:56:21.213173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:54.981 [2024-11-27 11:56:21.213236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.981 [2024-11-27 11:56:21.213256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:54.981 [2024-11-27 11:56:21.213268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.981 [2024-11-27 11:56:21.215108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.981 [2024-11-27 11:56:21.215144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:54.981 BaseBdev2 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.981 spare_malloc 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.981 11:56:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.981 spare_delay 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.981 [2024-11-27 11:56:21.292022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:54.981 [2024-11-27 11:56:21.292087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.981 [2024-11-27 11:56:21.292112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:54.981 [2024-11-27 11:56:21.292124] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.981 [2024-11-27 11:56:21.294183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.981 [2024-11-27 11:56:21.294223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:54.981 spare 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:54.981 11:56:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.981 [2024-11-27 11:56:21.304035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:54.981 [2024-11-27 11:56:21.305927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:54.981 [2024-11-27 11:56:21.306114] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:54.981 [2024-11-27 11:56:21.306129] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:54.981 [2024-11-27 11:56:21.306211] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:54.981 [2024-11-27 11:56:21.306337] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:54.981 [2024-11-27 11:56:21.306346] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:54.981 [2024-11-27 11:56:21.306442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.981 11:56:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.981 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.242 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.242 "name": "raid_bdev1", 00:17:55.242 "uuid": "39746380-b699-48ed-91ee-48d92038f357", 00:17:55.242 "strip_size_kb": 0, 00:17:55.242 "state": "online", 00:17:55.242 "raid_level": "raid1", 00:17:55.242 "superblock": true, 00:17:55.242 "num_base_bdevs": 2, 00:17:55.242 "num_base_bdevs_discovered": 2, 00:17:55.242 "num_base_bdevs_operational": 2, 00:17:55.242 "base_bdevs_list": [ 00:17:55.242 { 00:17:55.242 "name": "BaseBdev1", 00:17:55.242 "uuid": "7df93c97-c52f-585a-828a-03ab9daccb8e", 00:17:55.242 "is_configured": true, 00:17:55.242 "data_offset": 256, 00:17:55.242 "data_size": 7936 00:17:55.242 }, 00:17:55.242 { 00:17:55.242 "name": "BaseBdev2", 00:17:55.242 "uuid": 
"1124e629-e70d-532f-b5a5-f9ba5e924b96", 00:17:55.242 "is_configured": true, 00:17:55.242 "data_offset": 256, 00:17:55.242 "data_size": 7936 00:17:55.242 } 00:17:55.242 ] 00:17:55.242 }' 00:17:55.242 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.242 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.502 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:55.502 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:55.502 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.502 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.502 [2024-11-27 11:56:21.739608] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.502 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.502 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:55.502 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.502 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:55.502 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.502 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.502 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.502 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:55.502 11:56:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:55.502 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:55.502 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:55.502 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:55.502 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:55.502 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:55.502 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:55.502 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:55.502 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:55.502 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:55.502 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:55.502 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:55.502 11:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:55.762 [2024-11-27 11:56:22.014896] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:55.762 /dev/nbd0 00:17:55.762 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:55.762 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:55.762 11:56:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:55.762 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:55.762 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:55.762 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:55.762 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:55.762 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:55.762 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:55.762 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:55.762 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:55.762 1+0 records in 00:17:55.762 1+0 records out 00:17:55.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300825 s, 13.6 MB/s 00:17:55.762 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:55.762 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:55.762 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:55.762 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:55.762 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:55.762 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:55.762 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:55.762 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:55.762 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:55.762 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:56.701 7936+0 records in 00:17:56.701 7936+0 records out 00:17:56.701 32505856 bytes (33 MB, 31 MiB) copied, 0.674477 s, 48.2 MB/s 00:17:56.701 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:56.701 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:56.701 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:56.701 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:56.701 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:56.701 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:56.701 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:56.701 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:56.701 [2024-11-27 11:56:22.989100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.701 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:56.701 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:56.701 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:56.701 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:56.701 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:56.701 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:56.702 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:56.702 11:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:56.702 11:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.702 11:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.702 [2024-11-27 11:56:23.009191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:56.702 11:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.702 11:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:56.702 11:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.702 11:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.702 11:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.702 11:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.702 11:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:56.702 11:56:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.702 11:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.702 11:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.702 11:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.702 11:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.702 11:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.702 11:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.702 11:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.702 11:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.702 11:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.702 "name": "raid_bdev1", 00:17:56.702 "uuid": "39746380-b699-48ed-91ee-48d92038f357", 00:17:56.702 "strip_size_kb": 0, 00:17:56.702 "state": "online", 00:17:56.702 "raid_level": "raid1", 00:17:56.702 "superblock": true, 00:17:56.702 "num_base_bdevs": 2, 00:17:56.702 "num_base_bdevs_discovered": 1, 00:17:56.702 "num_base_bdevs_operational": 1, 00:17:56.702 "base_bdevs_list": [ 00:17:56.702 { 00:17:56.702 "name": null, 00:17:56.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.702 "is_configured": false, 00:17:56.702 "data_offset": 0, 00:17:56.702 "data_size": 7936 00:17:56.702 }, 00:17:56.702 { 00:17:56.702 "name": "BaseBdev2", 00:17:56.702 "uuid": "1124e629-e70d-532f-b5a5-f9ba5e924b96", 00:17:56.702 "is_configured": true, 00:17:56.702 "data_offset": 256, 00:17:56.702 "data_size": 7936 00:17:56.702 } 
00:17:56.702 ] 00:17:56.702 }' 00:17:56.702 11:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.702 11:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.273 11:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:57.273 11:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.273 11:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.273 [2024-11-27 11:56:23.468423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:57.273 [2024-11-27 11:56:23.482130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:57.273 11:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.273 11:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:57.273 [2024-11-27 11:56:23.484035] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:58.219 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:58.219 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.219 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:58.219 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:58.219 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.219 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.219 11:56:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.219 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.219 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.219 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.219 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.219 "name": "raid_bdev1", 00:17:58.219 "uuid": "39746380-b699-48ed-91ee-48d92038f357", 00:17:58.219 "strip_size_kb": 0, 00:17:58.219 "state": "online", 00:17:58.219 "raid_level": "raid1", 00:17:58.219 "superblock": true, 00:17:58.219 "num_base_bdevs": 2, 00:17:58.219 "num_base_bdevs_discovered": 2, 00:17:58.219 "num_base_bdevs_operational": 2, 00:17:58.219 "process": { 00:17:58.219 "type": "rebuild", 00:17:58.219 "target": "spare", 00:17:58.219 "progress": { 00:17:58.219 "blocks": 2560, 00:17:58.219 "percent": 32 00:17:58.219 } 00:17:58.219 }, 00:17:58.219 "base_bdevs_list": [ 00:17:58.219 { 00:17:58.219 "name": "spare", 00:17:58.219 "uuid": "46db7b7a-5e24-5835-8f2d-b63d24dee815", 00:17:58.219 "is_configured": true, 00:17:58.219 "data_offset": 256, 00:17:58.219 "data_size": 7936 00:17:58.219 }, 00:17:58.219 { 00:17:58.219 "name": "BaseBdev2", 00:17:58.219 "uuid": "1124e629-e70d-532f-b5a5-f9ba5e924b96", 00:17:58.219 "is_configured": true, 00:17:58.219 "data_offset": 256, 00:17:58.219 "data_size": 7936 00:17:58.219 } 00:17:58.219 ] 00:17:58.219 }' 00:17:58.219 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.219 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:58.219 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:17:58.479 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:58.479 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:58.479 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.479 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.479 [2024-11-27 11:56:24.652268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:58.479 [2024-11-27 11:56:24.690197] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:58.479 [2024-11-27 11:56:24.690293] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.479 [2024-11-27 11:56:24.690308] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:58.479 [2024-11-27 11:56:24.690322] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:58.479 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.479 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:58.479 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.479 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.479 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.479 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.479 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:58.479 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.479 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.479 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.479 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.479 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.479 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.479 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.479 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.479 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.479 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.479 "name": "raid_bdev1", 00:17:58.479 "uuid": "39746380-b699-48ed-91ee-48d92038f357", 00:17:58.479 "strip_size_kb": 0, 00:17:58.479 "state": "online", 00:17:58.479 "raid_level": "raid1", 00:17:58.479 "superblock": true, 00:17:58.479 "num_base_bdevs": 2, 00:17:58.479 "num_base_bdevs_discovered": 1, 00:17:58.479 "num_base_bdevs_operational": 1, 00:17:58.479 "base_bdevs_list": [ 00:17:58.479 { 00:17:58.479 "name": null, 00:17:58.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.479 "is_configured": false, 00:17:58.479 "data_offset": 0, 00:17:58.479 "data_size": 7936 00:17:58.479 }, 00:17:58.479 { 00:17:58.479 "name": "BaseBdev2", 00:17:58.479 "uuid": "1124e629-e70d-532f-b5a5-f9ba5e924b96", 00:17:58.479 "is_configured": true, 00:17:58.479 "data_offset": 
256, 00:17:58.479 "data_size": 7936 00:17:58.479 } 00:17:58.479 ] 00:17:58.479 }' 00:17:58.479 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.479 11:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.047 11:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:59.047 11:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.047 11:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:59.047 11:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:59.047 11:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.047 11:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.047 11:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.047 11:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.047 11:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.047 11:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.047 11:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.047 "name": "raid_bdev1", 00:17:59.047 "uuid": "39746380-b699-48ed-91ee-48d92038f357", 00:17:59.047 "strip_size_kb": 0, 00:17:59.047 "state": "online", 00:17:59.048 "raid_level": "raid1", 00:17:59.048 "superblock": true, 00:17:59.048 "num_base_bdevs": 2, 00:17:59.048 "num_base_bdevs_discovered": 1, 00:17:59.048 "num_base_bdevs_operational": 1, 
00:17:59.048 "base_bdevs_list": [ 00:17:59.048 { 00:17:59.048 "name": null, 00:17:59.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.048 "is_configured": false, 00:17:59.048 "data_offset": 0, 00:17:59.048 "data_size": 7936 00:17:59.048 }, 00:17:59.048 { 00:17:59.048 "name": "BaseBdev2", 00:17:59.048 "uuid": "1124e629-e70d-532f-b5a5-f9ba5e924b96", 00:17:59.048 "is_configured": true, 00:17:59.048 "data_offset": 256, 00:17:59.048 "data_size": 7936 00:17:59.048 } 00:17:59.048 ] 00:17:59.048 }' 00:17:59.048 11:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.048 11:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:59.048 11:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.048 11:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:59.048 11:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:59.048 11:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.048 11:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.048 [2024-11-27 11:56:25.329684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:59.048 [2024-11-27 11:56:25.343888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:59.048 11:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.048 11:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:59.048 [2024-11-27 11:56:25.345803] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:59.983 11:56:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:59.983 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.983 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:59.983 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:59.983 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.983 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.983 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.983 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.983 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.242 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.242 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.242 "name": "raid_bdev1", 00:18:00.242 "uuid": "39746380-b699-48ed-91ee-48d92038f357", 00:18:00.242 "strip_size_kb": 0, 00:18:00.242 "state": "online", 00:18:00.242 "raid_level": "raid1", 00:18:00.242 "superblock": true, 00:18:00.242 "num_base_bdevs": 2, 00:18:00.242 "num_base_bdevs_discovered": 2, 00:18:00.242 "num_base_bdevs_operational": 2, 00:18:00.242 "process": { 00:18:00.242 "type": "rebuild", 00:18:00.242 "target": "spare", 00:18:00.242 "progress": { 00:18:00.242 "blocks": 2560, 00:18:00.242 "percent": 32 00:18:00.242 } 00:18:00.242 }, 00:18:00.242 "base_bdevs_list": [ 00:18:00.242 { 00:18:00.242 "name": "spare", 00:18:00.242 "uuid": 
"46db7b7a-5e24-5835-8f2d-b63d24dee815", 00:18:00.242 "is_configured": true, 00:18:00.242 "data_offset": 256, 00:18:00.242 "data_size": 7936 00:18:00.242 }, 00:18:00.242 { 00:18:00.242 "name": "BaseBdev2", 00:18:00.242 "uuid": "1124e629-e70d-532f-b5a5-f9ba5e924b96", 00:18:00.242 "is_configured": true, 00:18:00.242 "data_offset": 256, 00:18:00.242 "data_size": 7936 00:18:00.242 } 00:18:00.242 ] 00:18:00.242 }' 00:18:00.242 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.242 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.242 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.242 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.242 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:00.242 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:00.242 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:00.242 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:00.242 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:00.242 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:00.242 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=721 00:18:00.242 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:00.242 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.242 
11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.242 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:00.242 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:00.242 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.242 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.242 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.242 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.242 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.242 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.242 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.242 "name": "raid_bdev1", 00:18:00.242 "uuid": "39746380-b699-48ed-91ee-48d92038f357", 00:18:00.242 "strip_size_kb": 0, 00:18:00.242 "state": "online", 00:18:00.242 "raid_level": "raid1", 00:18:00.242 "superblock": true, 00:18:00.242 "num_base_bdevs": 2, 00:18:00.242 "num_base_bdevs_discovered": 2, 00:18:00.242 "num_base_bdevs_operational": 2, 00:18:00.242 "process": { 00:18:00.242 "type": "rebuild", 00:18:00.242 "target": "spare", 00:18:00.242 "progress": { 00:18:00.242 "blocks": 2816, 00:18:00.242 "percent": 35 00:18:00.242 } 00:18:00.242 }, 00:18:00.242 "base_bdevs_list": [ 00:18:00.242 { 00:18:00.242 "name": "spare", 00:18:00.242 "uuid": "46db7b7a-5e24-5835-8f2d-b63d24dee815", 00:18:00.242 "is_configured": true, 00:18:00.242 "data_offset": 256, 00:18:00.242 "data_size": 7936 00:18:00.242 
}, 00:18:00.242 { 00:18:00.242 "name": "BaseBdev2", 00:18:00.242 "uuid": "1124e629-e70d-532f-b5a5-f9ba5e924b96", 00:18:00.242 "is_configured": true, 00:18:00.242 "data_offset": 256, 00:18:00.242 "data_size": 7936 00:18:00.242 } 00:18:00.242 ] 00:18:00.242 }' 00:18:00.242 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.242 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.242 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.500 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.500 11:56:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:01.436 11:56:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:01.436 11:56:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:01.436 11:56:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.436 11:56:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:01.436 11:56:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:01.436 11:56:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.436 11:56:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.436 11:56:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.436 11:56:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:18:01.436 11:56:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.436 11:56:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.436 11:56:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.436 "name": "raid_bdev1", 00:18:01.436 "uuid": "39746380-b699-48ed-91ee-48d92038f357", 00:18:01.436 "strip_size_kb": 0, 00:18:01.436 "state": "online", 00:18:01.436 "raid_level": "raid1", 00:18:01.436 "superblock": true, 00:18:01.436 "num_base_bdevs": 2, 00:18:01.436 "num_base_bdevs_discovered": 2, 00:18:01.436 "num_base_bdevs_operational": 2, 00:18:01.436 "process": { 00:18:01.436 "type": "rebuild", 00:18:01.436 "target": "spare", 00:18:01.436 "progress": { 00:18:01.436 "blocks": 5888, 00:18:01.436 "percent": 74 00:18:01.436 } 00:18:01.436 }, 00:18:01.436 "base_bdevs_list": [ 00:18:01.436 { 00:18:01.436 "name": "spare", 00:18:01.436 "uuid": "46db7b7a-5e24-5835-8f2d-b63d24dee815", 00:18:01.436 "is_configured": true, 00:18:01.436 "data_offset": 256, 00:18:01.436 "data_size": 7936 00:18:01.436 }, 00:18:01.436 { 00:18:01.436 "name": "BaseBdev2", 00:18:01.436 "uuid": "1124e629-e70d-532f-b5a5-f9ba5e924b96", 00:18:01.436 "is_configured": true, 00:18:01.436 "data_offset": 256, 00:18:01.436 "data_size": 7936 00:18:01.436 } 00:18:01.436 ] 00:18:01.436 }' 00:18:01.436 11:56:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.436 11:56:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:01.436 11:56:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.436 11:56:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:01.436 11:56:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:18:02.373 [2024-11-27 11:56:28.461606] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:02.373 [2024-11-27 11:56:28.461700] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:02.373 [2024-11-27 11:56:28.461854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.633 11:56:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:02.633 11:56:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:02.633 11:56:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.633 11:56:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:02.633 11:56:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:02.633 11:56:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.633 11:56:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.633 11:56:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.633 11:56:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.633 11:56:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.633 11:56:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.633 11:56:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.633 "name": "raid_bdev1", 00:18:02.633 "uuid": "39746380-b699-48ed-91ee-48d92038f357", 00:18:02.633 
"strip_size_kb": 0, 00:18:02.633 "state": "online", 00:18:02.633 "raid_level": "raid1", 00:18:02.633 "superblock": true, 00:18:02.633 "num_base_bdevs": 2, 00:18:02.633 "num_base_bdevs_discovered": 2, 00:18:02.633 "num_base_bdevs_operational": 2, 00:18:02.633 "base_bdevs_list": [ 00:18:02.633 { 00:18:02.633 "name": "spare", 00:18:02.633 "uuid": "46db7b7a-5e24-5835-8f2d-b63d24dee815", 00:18:02.633 "is_configured": true, 00:18:02.633 "data_offset": 256, 00:18:02.633 "data_size": 7936 00:18:02.633 }, 00:18:02.633 { 00:18:02.633 "name": "BaseBdev2", 00:18:02.633 "uuid": "1124e629-e70d-532f-b5a5-f9ba5e924b96", 00:18:02.633 "is_configured": true, 00:18:02.633 "data_offset": 256, 00:18:02.633 "data_size": 7936 00:18:02.633 } 00:18:02.633 ] 00:18:02.633 }' 00:18:02.633 11:56:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.633 11:56:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:02.633 11:56:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.633 11:56:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:02.633 11:56:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:18:02.633 11:56:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:02.633 11:56:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.633 11:56:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:02.633 11:56:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:02.633 11:56:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.633 11:56:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.633 11:56:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.633 11:56:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.633 11:56:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.633 11:56:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.633 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.633 "name": "raid_bdev1", 00:18:02.633 "uuid": "39746380-b699-48ed-91ee-48d92038f357", 00:18:02.633 "strip_size_kb": 0, 00:18:02.633 "state": "online", 00:18:02.633 "raid_level": "raid1", 00:18:02.633 "superblock": true, 00:18:02.633 "num_base_bdevs": 2, 00:18:02.633 "num_base_bdevs_discovered": 2, 00:18:02.633 "num_base_bdevs_operational": 2, 00:18:02.633 "base_bdevs_list": [ 00:18:02.633 { 00:18:02.633 "name": "spare", 00:18:02.633 "uuid": "46db7b7a-5e24-5835-8f2d-b63d24dee815", 00:18:02.633 "is_configured": true, 00:18:02.633 "data_offset": 256, 00:18:02.633 "data_size": 7936 00:18:02.633 }, 00:18:02.633 { 00:18:02.633 "name": "BaseBdev2", 00:18:02.633 "uuid": "1124e629-e70d-532f-b5a5-f9ba5e924b96", 00:18:02.633 "is_configured": true, 00:18:02.634 "data_offset": 256, 00:18:02.634 "data_size": 7936 00:18:02.634 } 00:18:02.634 ] 00:18:02.634 }' 00:18:02.634 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.893 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:02.893 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.893 11:56:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:02.893 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:02.893 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.893 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.893 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.893 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.893 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:02.893 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.893 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.893 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.894 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.894 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.894 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.894 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.894 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.894 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.894 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.894 "name": "raid_bdev1", 00:18:02.894 "uuid": "39746380-b699-48ed-91ee-48d92038f357", 00:18:02.894 "strip_size_kb": 0, 00:18:02.894 "state": "online", 00:18:02.894 "raid_level": "raid1", 00:18:02.894 "superblock": true, 00:18:02.894 "num_base_bdevs": 2, 00:18:02.894 "num_base_bdevs_discovered": 2, 00:18:02.894 "num_base_bdevs_operational": 2, 00:18:02.894 "base_bdevs_list": [ 00:18:02.894 { 00:18:02.894 "name": "spare", 00:18:02.894 "uuid": "46db7b7a-5e24-5835-8f2d-b63d24dee815", 00:18:02.894 "is_configured": true, 00:18:02.894 "data_offset": 256, 00:18:02.894 "data_size": 7936 00:18:02.894 }, 00:18:02.894 { 00:18:02.894 "name": "BaseBdev2", 00:18:02.894 "uuid": "1124e629-e70d-532f-b5a5-f9ba5e924b96", 00:18:02.894 "is_configured": true, 00:18:02.894 "data_offset": 256, 00:18:02.894 "data_size": 7936 00:18:02.894 } 00:18:02.894 ] 00:18:02.894 }' 00:18:02.894 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.894 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.153 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:03.153 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.153 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.153 [2024-11-27 11:56:29.513220] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:03.153 [2024-11-27 11:56:29.513333] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.153 [2024-11-27 11:56:29.513478] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.153 [2024-11-27 11:56:29.513597] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:18:03.153 [2024-11-27 11:56:29.513655] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:03.153 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.153 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.153 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.153 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:18:03.153 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.153 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.411 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:03.411 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:03.411 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:03.411 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:03.411 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:03.411 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:03.411 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:03.412 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:03.412 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:03.412 11:56:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:03.412 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:03.412 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:03.412 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:03.412 /dev/nbd0 00:18:03.674 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:03.675 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:03.675 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:03.675 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:03.675 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:03.675 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:03.675 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:03.675 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:03.675 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:03.675 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:03.675 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:03.675 1+0 records in 00:18:03.675 1+0 records out 00:18:03.675 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530192 
s, 7.7 MB/s 00:18:03.675 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.675 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:03.675 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.675 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:03.675 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:03.675 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:03.675 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:03.675 11:56:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:03.675 /dev/nbd1 00:18:03.675 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:03.939 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:03.939 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:03.939 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:03.939 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:03.939 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:03.940 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:03.940 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@877 -- # break 00:18:03.940 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:03.940 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:03.940 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:03.940 1+0 records in 00:18:03.940 1+0 records out 00:18:03.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565676 s, 7.2 MB/s 00:18:03.940 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.940 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:03.940 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.940 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:03.940 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:03.940 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:03.940 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:03.940 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:03.940 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:03.940 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:03.940 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:03.940 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:03.940 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:03.940 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:03.940 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:04.199 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:04.199 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:04.199 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:04.199 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:04.199 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:04.199 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:04.199 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:04.199 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:04.199 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:04.199 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:04.458 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:04.459 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:04.459 
11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:04.459 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:04.459 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:04.459 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:04.459 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:04.459 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:04.459 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:04.459 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:04.459 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.459 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.459 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.459 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:04.459 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.459 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.459 [2024-11-27 11:56:30.734335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:04.459 [2024-11-27 11:56:30.734392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.459 [2024-11-27 11:56:30.734434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 
00:18:04.459 [2024-11-27 11:56:30.734443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.459 [2024-11-27 11:56:30.736590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.459 [2024-11-27 11:56:30.736631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:04.459 [2024-11-27 11:56:30.736702] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:04.459 [2024-11-27 11:56:30.736758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:04.459 [2024-11-27 11:56:30.736932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:04.459 spare 00:18:04.459 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.459 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:04.459 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.459 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.459 [2024-11-27 11:56:30.836842] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:04.459 [2024-11-27 11:56:30.836918] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:04.459 [2024-11-27 11:56:30.837066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:04.459 [2024-11-27 11:56:30.837251] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:04.459 [2024-11-27 11:56:30.837261] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:04.459 [2024-11-27 11:56:30.837414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:18:04.719 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.719 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:04.719 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.719 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.719 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.719 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.719 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:04.719 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.719 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.719 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.719 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.719 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.719 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.719 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.719 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.719 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.719 11:56:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.719 "name": "raid_bdev1", 00:18:04.719 "uuid": "39746380-b699-48ed-91ee-48d92038f357", 00:18:04.719 "strip_size_kb": 0, 00:18:04.719 "state": "online", 00:18:04.719 "raid_level": "raid1", 00:18:04.719 "superblock": true, 00:18:04.719 "num_base_bdevs": 2, 00:18:04.719 "num_base_bdevs_discovered": 2, 00:18:04.719 "num_base_bdevs_operational": 2, 00:18:04.719 "base_bdevs_list": [ 00:18:04.719 { 00:18:04.719 "name": "spare", 00:18:04.719 "uuid": "46db7b7a-5e24-5835-8f2d-b63d24dee815", 00:18:04.719 "is_configured": true, 00:18:04.719 "data_offset": 256, 00:18:04.719 "data_size": 7936 00:18:04.719 }, 00:18:04.719 { 00:18:04.719 "name": "BaseBdev2", 00:18:04.719 "uuid": "1124e629-e70d-532f-b5a5-f9ba5e924b96", 00:18:04.719 "is_configured": true, 00:18:04.719 "data_offset": 256, 00:18:04.719 "data_size": 7936 00:18:04.719 } 00:18:04.719 ] 00:18:04.719 }' 00:18:04.719 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.719 11:56:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.979 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:04.979 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.979 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:04.979 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:04.979 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.979 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.979 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.979 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.979 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.979 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.979 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.979 "name": "raid_bdev1", 00:18:04.979 "uuid": "39746380-b699-48ed-91ee-48d92038f357", 00:18:04.979 "strip_size_kb": 0, 00:18:04.979 "state": "online", 00:18:04.979 "raid_level": "raid1", 00:18:04.979 "superblock": true, 00:18:04.979 "num_base_bdevs": 2, 00:18:04.979 "num_base_bdevs_discovered": 2, 00:18:04.979 "num_base_bdevs_operational": 2, 00:18:04.979 "base_bdevs_list": [ 00:18:04.979 { 00:18:04.979 "name": "spare", 00:18:04.979 "uuid": "46db7b7a-5e24-5835-8f2d-b63d24dee815", 00:18:04.979 "is_configured": true, 00:18:04.979 "data_offset": 256, 00:18:04.979 "data_size": 7936 00:18:04.979 }, 00:18:04.979 { 00:18:04.979 "name": "BaseBdev2", 00:18:04.979 "uuid": "1124e629-e70d-532f-b5a5-f9ba5e924b96", 00:18:04.979 "is_configured": true, 00:18:04.979 "data_offset": 256, 00:18:04.979 "data_size": 7936 00:18:04.979 } 00:18:04.979 ] 00:18:04.979 }' 00:18:04.979 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.240 [2024-11-27 11:56:31.477150] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:05.240 11:56:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.240 "name": "raid_bdev1", 00:18:05.240 "uuid": "39746380-b699-48ed-91ee-48d92038f357", 00:18:05.240 "strip_size_kb": 0, 00:18:05.240 "state": "online", 00:18:05.240 "raid_level": "raid1", 00:18:05.240 "superblock": true, 00:18:05.240 "num_base_bdevs": 2, 00:18:05.240 "num_base_bdevs_discovered": 1, 00:18:05.240 "num_base_bdevs_operational": 1, 00:18:05.240 "base_bdevs_list": [ 00:18:05.240 { 00:18:05.240 "name": null, 00:18:05.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.240 "is_configured": false, 00:18:05.240 "data_offset": 0, 00:18:05.240 "data_size": 7936 00:18:05.240 }, 00:18:05.240 { 00:18:05.240 "name": "BaseBdev2", 00:18:05.240 "uuid": "1124e629-e70d-532f-b5a5-f9ba5e924b96", 00:18:05.240 "is_configured": true, 00:18:05.240 "data_offset": 256, 00:18:05.240 "data_size": 7936 00:18:05.240 } 
00:18:05.240 ] 00:18:05.240 }' 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.240 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.810 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:05.810 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.810 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.810 [2024-11-27 11:56:31.920400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:05.810 [2024-11-27 11:56:31.920691] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:05.810 [2024-11-27 11:56:31.920763] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:05.810 [2024-11-27 11:56:31.920829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:05.810 [2024-11-27 11:56:31.935020] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:05.810 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.810 11:56:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:05.810 [2024-11-27 11:56:31.937123] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:06.748 11:56:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:06.748 11:56:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.748 11:56:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:06.748 11:56:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:06.748 11:56:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.748 11:56:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.748 11:56:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.748 11:56:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.748 11:56:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.748 11:56:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.748 11:56:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.748 "name": "raid_bdev1", 00:18:06.748 
"uuid": "39746380-b699-48ed-91ee-48d92038f357", 00:18:06.748 "strip_size_kb": 0, 00:18:06.748 "state": "online", 00:18:06.748 "raid_level": "raid1", 00:18:06.748 "superblock": true, 00:18:06.748 "num_base_bdevs": 2, 00:18:06.748 "num_base_bdevs_discovered": 2, 00:18:06.748 "num_base_bdevs_operational": 2, 00:18:06.748 "process": { 00:18:06.748 "type": "rebuild", 00:18:06.748 "target": "spare", 00:18:06.748 "progress": { 00:18:06.748 "blocks": 2560, 00:18:06.748 "percent": 32 00:18:06.748 } 00:18:06.748 }, 00:18:06.748 "base_bdevs_list": [ 00:18:06.748 { 00:18:06.748 "name": "spare", 00:18:06.748 "uuid": "46db7b7a-5e24-5835-8f2d-b63d24dee815", 00:18:06.748 "is_configured": true, 00:18:06.748 "data_offset": 256, 00:18:06.748 "data_size": 7936 00:18:06.748 }, 00:18:06.748 { 00:18:06.748 "name": "BaseBdev2", 00:18:06.748 "uuid": "1124e629-e70d-532f-b5a5-f9ba5e924b96", 00:18:06.748 "is_configured": true, 00:18:06.748 "data_offset": 256, 00:18:06.748 "data_size": 7936 00:18:06.748 } 00:18:06.748 ] 00:18:06.748 }' 00:18:06.748 11:56:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.748 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:06.748 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.748 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:06.748 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:06.748 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.748 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.748 [2024-11-27 11:56:33.100828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:07.010 
[2024-11-27 11:56:33.143207] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:07.010 [2024-11-27 11:56:33.143369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.010 [2024-11-27 11:56:33.143410] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:07.010 [2024-11-27 11:56:33.143472] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:07.010 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.010 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:07.010 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.010 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.010 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.010 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.010 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:07.010 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.010 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.010 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.010 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.010 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.010 11:56:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.010 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.010 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.010 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.011 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.011 "name": "raid_bdev1", 00:18:07.011 "uuid": "39746380-b699-48ed-91ee-48d92038f357", 00:18:07.011 "strip_size_kb": 0, 00:18:07.011 "state": "online", 00:18:07.011 "raid_level": "raid1", 00:18:07.011 "superblock": true, 00:18:07.011 "num_base_bdevs": 2, 00:18:07.011 "num_base_bdevs_discovered": 1, 00:18:07.011 "num_base_bdevs_operational": 1, 00:18:07.011 "base_bdevs_list": [ 00:18:07.011 { 00:18:07.011 "name": null, 00:18:07.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.011 "is_configured": false, 00:18:07.011 "data_offset": 0, 00:18:07.011 "data_size": 7936 00:18:07.011 }, 00:18:07.011 { 00:18:07.011 "name": "BaseBdev2", 00:18:07.011 "uuid": "1124e629-e70d-532f-b5a5-f9ba5e924b96", 00:18:07.011 "is_configured": true, 00:18:07.011 "data_offset": 256, 00:18:07.011 "data_size": 7936 00:18:07.011 } 00:18:07.011 ] 00:18:07.011 }' 00:18:07.011 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.011 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.271 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:07.271 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.271 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:07.271 [2024-11-27 11:56:33.555406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:07.271 [2024-11-27 11:56:33.555542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.271 [2024-11-27 11:56:33.555595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:07.271 [2024-11-27 11:56:33.555628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.271 [2024-11-27 11:56:33.555960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.271 [2024-11-27 11:56:33.555989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:07.271 [2024-11-27 11:56:33.556063] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:07.271 [2024-11-27 11:56:33.556078] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:07.271 [2024-11-27 11:56:33.556088] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:07.271 [2024-11-27 11:56:33.556118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:07.271 [2024-11-27 11:56:33.570010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:07.271 spare 00:18:07.271 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.271 11:56:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:07.271 [2024-11-27 11:56:33.571976] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:08.210 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:08.210 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.210 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:08.210 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:08.210 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.210 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.210 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.210 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.210 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.469 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.469 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.469 "name": 
"raid_bdev1", 00:18:08.469 "uuid": "39746380-b699-48ed-91ee-48d92038f357", 00:18:08.469 "strip_size_kb": 0, 00:18:08.469 "state": "online", 00:18:08.469 "raid_level": "raid1", 00:18:08.469 "superblock": true, 00:18:08.469 "num_base_bdevs": 2, 00:18:08.469 "num_base_bdevs_discovered": 2, 00:18:08.469 "num_base_bdevs_operational": 2, 00:18:08.469 "process": { 00:18:08.469 "type": "rebuild", 00:18:08.469 "target": "spare", 00:18:08.469 "progress": { 00:18:08.469 "blocks": 2560, 00:18:08.469 "percent": 32 00:18:08.469 } 00:18:08.469 }, 00:18:08.469 "base_bdevs_list": [ 00:18:08.469 { 00:18:08.469 "name": "spare", 00:18:08.469 "uuid": "46db7b7a-5e24-5835-8f2d-b63d24dee815", 00:18:08.469 "is_configured": true, 00:18:08.469 "data_offset": 256, 00:18:08.469 "data_size": 7936 00:18:08.469 }, 00:18:08.469 { 00:18:08.469 "name": "BaseBdev2", 00:18:08.469 "uuid": "1124e629-e70d-532f-b5a5-f9ba5e924b96", 00:18:08.469 "is_configured": true, 00:18:08.469 "data_offset": 256, 00:18:08.469 "data_size": 7936 00:18:08.469 } 00:18:08.469 ] 00:18:08.469 }' 00:18:08.470 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.470 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:08.470 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.470 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:08.470 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:08.470 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.470 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.470 [2024-11-27 11:56:34.736301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:08.470 [2024-11-27 11:56:34.778106] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:08.470 [2024-11-27 11:56:34.778172] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.470 [2024-11-27 11:56:34.778190] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:08.470 [2024-11-27 11:56:34.778197] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:08.470 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.470 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:08.470 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.470 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.470 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.470 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.470 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:08.470 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.470 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.470 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.470 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.470 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:08.470 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.470 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.470 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.470 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.470 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.470 "name": "raid_bdev1", 00:18:08.470 "uuid": "39746380-b699-48ed-91ee-48d92038f357", 00:18:08.470 "strip_size_kb": 0, 00:18:08.470 "state": "online", 00:18:08.470 "raid_level": "raid1", 00:18:08.470 "superblock": true, 00:18:08.470 "num_base_bdevs": 2, 00:18:08.470 "num_base_bdevs_discovered": 1, 00:18:08.470 "num_base_bdevs_operational": 1, 00:18:08.470 "base_bdevs_list": [ 00:18:08.470 { 00:18:08.470 "name": null, 00:18:08.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.470 "is_configured": false, 00:18:08.470 "data_offset": 0, 00:18:08.470 "data_size": 7936 00:18:08.470 }, 00:18:08.470 { 00:18:08.470 "name": "BaseBdev2", 00:18:08.470 "uuid": "1124e629-e70d-532f-b5a5-f9ba5e924b96", 00:18:08.470 "is_configured": true, 00:18:08.470 "data_offset": 256, 00:18:08.470 "data_size": 7936 00:18:08.470 } 00:18:08.470 ] 00:18:08.470 }' 00:18:08.470 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.470 11:56:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.040 11:56:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:09.040 11:56:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.040 11:56:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:09.040 11:56:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:09.040 11:56:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.040 11:56:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.040 11:56:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.040 11:56:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.040 11:56:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.040 11:56:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.040 11:56:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.040 "name": "raid_bdev1", 00:18:09.040 "uuid": "39746380-b699-48ed-91ee-48d92038f357", 00:18:09.040 "strip_size_kb": 0, 00:18:09.040 "state": "online", 00:18:09.040 "raid_level": "raid1", 00:18:09.040 "superblock": true, 00:18:09.040 "num_base_bdevs": 2, 00:18:09.040 "num_base_bdevs_discovered": 1, 00:18:09.040 "num_base_bdevs_operational": 1, 00:18:09.040 "base_bdevs_list": [ 00:18:09.040 { 00:18:09.040 "name": null, 00:18:09.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.040 "is_configured": false, 00:18:09.040 "data_offset": 0, 00:18:09.040 "data_size": 7936 00:18:09.040 }, 00:18:09.040 { 00:18:09.040 "name": "BaseBdev2", 00:18:09.040 "uuid": "1124e629-e70d-532f-b5a5-f9ba5e924b96", 00:18:09.040 "is_configured": true, 00:18:09.040 "data_offset": 256, 00:18:09.040 "data_size": 7936 00:18:09.040 } 00:18:09.040 ] 00:18:09.040 }' 00:18:09.040 11:56:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.040 11:56:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:09.040 11:56:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.040 11:56:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:09.040 11:56:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:09.040 11:56:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.040 11:56:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.040 11:56:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.040 11:56:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:09.040 11:56:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.040 11:56:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.040 [2024-11-27 11:56:35.421720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:09.040 [2024-11-27 11:56:35.421785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.040 [2024-11-27 11:56:35.421809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:09.040 [2024-11-27 11:56:35.421819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.040 [2024-11-27 11:56:35.422096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.040 [2024-11-27 11:56:35.422115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:18:09.040 [2024-11-27 11:56:35.422171] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:09.040 [2024-11-27 11:56:35.422186] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:09.040 [2024-11-27 11:56:35.422196] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:09.040 [2024-11-27 11:56:35.422208] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:09.300 BaseBdev1 00:18:09.300 11:56:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.300 11:56:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:10.238 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:10.238 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.238 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.238 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.238 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.238 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:10.238 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.238 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.238 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:10.238 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.238 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.238 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.238 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.238 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.238 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.238 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.238 "name": "raid_bdev1", 00:18:10.238 "uuid": "39746380-b699-48ed-91ee-48d92038f357", 00:18:10.238 "strip_size_kb": 0, 00:18:10.238 "state": "online", 00:18:10.238 "raid_level": "raid1", 00:18:10.238 "superblock": true, 00:18:10.238 "num_base_bdevs": 2, 00:18:10.238 "num_base_bdevs_discovered": 1, 00:18:10.238 "num_base_bdevs_operational": 1, 00:18:10.238 "base_bdevs_list": [ 00:18:10.238 { 00:18:10.238 "name": null, 00:18:10.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.238 "is_configured": false, 00:18:10.238 "data_offset": 0, 00:18:10.238 "data_size": 7936 00:18:10.238 }, 00:18:10.238 { 00:18:10.238 "name": "BaseBdev2", 00:18:10.238 "uuid": "1124e629-e70d-532f-b5a5-f9ba5e924b96", 00:18:10.238 "is_configured": true, 00:18:10.238 "data_offset": 256, 00:18:10.238 "data_size": 7936 00:18:10.238 } 00:18:10.238 ] 00:18:10.238 }' 00:18:10.238 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.238 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.807 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:18:10.807 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.807 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:10.807 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:10.807 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.807 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.807 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.807 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.807 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.807 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.807 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.807 "name": "raid_bdev1", 00:18:10.807 "uuid": "39746380-b699-48ed-91ee-48d92038f357", 00:18:10.807 "strip_size_kb": 0, 00:18:10.807 "state": "online", 00:18:10.807 "raid_level": "raid1", 00:18:10.807 "superblock": true, 00:18:10.807 "num_base_bdevs": 2, 00:18:10.807 "num_base_bdevs_discovered": 1, 00:18:10.807 "num_base_bdevs_operational": 1, 00:18:10.807 "base_bdevs_list": [ 00:18:10.807 { 00:18:10.807 "name": null, 00:18:10.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.807 "is_configured": false, 00:18:10.807 "data_offset": 0, 00:18:10.807 "data_size": 7936 00:18:10.807 }, 00:18:10.807 { 00:18:10.807 "name": "BaseBdev2", 00:18:10.807 "uuid": "1124e629-e70d-532f-b5a5-f9ba5e924b96", 00:18:10.807 "is_configured": 
true, 00:18:10.807 "data_offset": 256, 00:18:10.807 "data_size": 7936 00:18:10.807 } 00:18:10.807 ] 00:18:10.807 }' 00:18:10.807 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.807 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:10.807 11:56:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.807 11:56:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:10.807 11:56:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:10.807 11:56:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:10.807 11:56:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:10.807 11:56:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:10.807 11:56:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.807 11:56:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:10.807 11:56:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.807 11:56:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:10.807 11:56:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.807 11:56:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.807 [2024-11-27 11:56:37.031018] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:10.807 [2024-11-27 11:56:37.031201] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:10.807 [2024-11-27 11:56:37.031216] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:10.807 request: 00:18:10.807 { 00:18:10.807 "base_bdev": "BaseBdev1", 00:18:10.807 "raid_bdev": "raid_bdev1", 00:18:10.807 "method": "bdev_raid_add_base_bdev", 00:18:10.807 "req_id": 1 00:18:10.807 } 00:18:10.807 Got JSON-RPC error response 00:18:10.807 response: 00:18:10.807 { 00:18:10.807 "code": -22, 00:18:10.807 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:10.807 } 00:18:10.807 11:56:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:10.808 11:56:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:10.808 11:56:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:10.808 11:56:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:10.808 11:56:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:10.808 11:56:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:11.746 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:11.746 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.746 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.746 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:18:11.746 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.746 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:11.746 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.746 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.746 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.746 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.746 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.746 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.746 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.746 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.746 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.746 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.746 "name": "raid_bdev1", 00:18:11.746 "uuid": "39746380-b699-48ed-91ee-48d92038f357", 00:18:11.746 "strip_size_kb": 0, 00:18:11.746 "state": "online", 00:18:11.746 "raid_level": "raid1", 00:18:11.746 "superblock": true, 00:18:11.746 "num_base_bdevs": 2, 00:18:11.746 "num_base_bdevs_discovered": 1, 00:18:11.746 "num_base_bdevs_operational": 1, 00:18:11.746 "base_bdevs_list": [ 00:18:11.746 { 00:18:11.746 "name": null, 00:18:11.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.746 "is_configured": false, 00:18:11.746 
"data_offset": 0, 00:18:11.746 "data_size": 7936 00:18:11.746 }, 00:18:11.746 { 00:18:11.746 "name": "BaseBdev2", 00:18:11.746 "uuid": "1124e629-e70d-532f-b5a5-f9ba5e924b96", 00:18:11.746 "is_configured": true, 00:18:11.746 "data_offset": 256, 00:18:11.746 "data_size": 7936 00:18:11.746 } 00:18:11.746 ] 00:18:11.746 }' 00:18:11.746 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.746 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.317 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:12.317 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.317 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:12.317 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:12.317 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.317 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.317 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.317 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.317 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.317 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.317 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.317 "name": "raid_bdev1", 00:18:12.317 "uuid": "39746380-b699-48ed-91ee-48d92038f357", 00:18:12.317 
"strip_size_kb": 0, 00:18:12.317 "state": "online", 00:18:12.317 "raid_level": "raid1", 00:18:12.317 "superblock": true, 00:18:12.317 "num_base_bdevs": 2, 00:18:12.317 "num_base_bdevs_discovered": 1, 00:18:12.317 "num_base_bdevs_operational": 1, 00:18:12.317 "base_bdevs_list": [ 00:18:12.317 { 00:18:12.317 "name": null, 00:18:12.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.317 "is_configured": false, 00:18:12.317 "data_offset": 0, 00:18:12.317 "data_size": 7936 00:18:12.317 }, 00:18:12.317 { 00:18:12.317 "name": "BaseBdev2", 00:18:12.317 "uuid": "1124e629-e70d-532f-b5a5-f9ba5e924b96", 00:18:12.317 "is_configured": true, 00:18:12.317 "data_offset": 256, 00:18:12.317 "data_size": 7936 00:18:12.317 } 00:18:12.317 ] 00:18:12.317 }' 00:18:12.317 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.317 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:12.317 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.317 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:12.317 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87840 00:18:12.317 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87840 ']' 00:18:12.317 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87840 00:18:12.317 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:12.317 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.317 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87840 00:18:12.317 11:56:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:12.317 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:12.317 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87840' 00:18:12.317 killing process with pid 87840 00:18:12.317 Received shutdown signal, test time was about 60.000000 seconds 00:18:12.317 00:18:12.317 Latency(us) 00:18:12.317 [2024-11-27T11:56:38.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.317 [2024-11-27T11:56:38.702Z] =================================================================================================================== 00:18:12.317 [2024-11-27T11:56:38.702Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:12.317 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87840 00:18:12.317 [2024-11-27 11:56:38.634695] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:12.317 [2024-11-27 11:56:38.634849] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:12.317 11:56:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87840 00:18:12.317 [2024-11-27 11:56:38.634901] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:12.317 [2024-11-27 11:56:38.634913] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:12.887 [2024-11-27 11:56:38.967963] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:13.823 11:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:18:13.823 00:18:13.823 real 0m19.926s 00:18:13.823 user 0m26.089s 00:18:13.823 sys 0m2.571s 00:18:13.823 11:56:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.823 ************************************ 00:18:13.823 END TEST raid_rebuild_test_sb_md_separate 00:18:13.823 ************************************ 00:18:13.823 11:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.823 11:56:40 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:13.823 11:56:40 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:13.823 11:56:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:13.823 11:56:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.823 11:56:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:13.823 ************************************ 00:18:13.823 START TEST raid_state_function_test_sb_md_interleaved 00:18:13.823 ************************************ 00:18:13.823 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:13.823 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:13.823 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:13.823 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:13.823 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:13.823 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:13.823 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:13.823 11:56:40 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:13.823 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:13.823 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:13.823 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:13.823 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:13.823 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:13.823 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:13.823 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:13.823 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:13.823 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:13.823 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:13.823 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:13.823 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:13.823 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:13.823 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:13.824 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:18:13.824 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88532 00:18:13.824 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:13.824 Process raid pid: 88532 00:18:13.824 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88532' 00:18:13.824 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88532 00:18:13.824 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88532 ']' 00:18:13.824 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.824 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.824 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.824 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.824 11:56:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.083 [2024-11-27 11:56:40.261210] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:18:14.083 [2024-11-27 11:56:40.261405] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.083 [2024-11-27 11:56:40.419910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.343 [2024-11-27 11:56:40.537573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.602 [2024-11-27 11:56:40.735869] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:14.602 [2024-11-27 11:56:40.736008] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:14.862 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.862 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:14.862 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:14.862 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.862 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.862 [2024-11-27 11:56:41.089742] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:14.862 [2024-11-27 11:56:41.089805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:14.862 [2024-11-27 11:56:41.089817] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:14.862 [2024-11-27 11:56:41.089829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:14.862 11:56:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.862 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:14.862 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:14.862 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:14.862 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.862 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.862 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:14.862 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.862 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.862 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.862 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.862 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.862 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.862 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.862 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.862 11:56:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.862 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.862 "name": "Existed_Raid", 00:18:14.862 "uuid": "8cfdd044-879e-4409-98cb-36cc87723919", 00:18:14.862 "strip_size_kb": 0, 00:18:14.862 "state": "configuring", 00:18:14.862 "raid_level": "raid1", 00:18:14.862 "superblock": true, 00:18:14.862 "num_base_bdevs": 2, 00:18:14.862 "num_base_bdevs_discovered": 0, 00:18:14.862 "num_base_bdevs_operational": 2, 00:18:14.862 "base_bdevs_list": [ 00:18:14.862 { 00:18:14.862 "name": "BaseBdev1", 00:18:14.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.862 "is_configured": false, 00:18:14.862 "data_offset": 0, 00:18:14.862 "data_size": 0 00:18:14.862 }, 00:18:14.862 { 00:18:14.862 "name": "BaseBdev2", 00:18:14.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.862 "is_configured": false, 00:18:14.862 "data_offset": 0, 00:18:14.862 "data_size": 0 00:18:14.862 } 00:18:14.862 ] 00:18:14.862 }' 00:18:14.862 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.862 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.430 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:15.430 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.430 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.430 [2024-11-27 11:56:41.544943] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:15.430 [2024-11-27 11:56:41.544987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:18:15.430 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.430 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:15.430 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.430 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.430 [2024-11-27 11:56:41.556917] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:15.430 [2024-11-27 11:56:41.557019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:15.430 [2024-11-27 11:56:41.557057] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:15.430 [2024-11-27 11:56:41.557111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:15.430 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.430 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:15.430 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.430 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.430 [2024-11-27 11:56:41.608616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:15.430 BaseBdev1 00:18:15.430 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.430 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:15.430 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:15.430 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:15.430 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:15.430 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:15.430 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:15.430 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:15.430 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.431 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.431 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.431 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:15.431 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.431 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.431 [ 00:18:15.431 { 00:18:15.431 "name": "BaseBdev1", 00:18:15.431 "aliases": [ 00:18:15.431 "d5ee5881-db1d-4344-927b-dcb92f7095d7" 00:18:15.431 ], 00:18:15.431 "product_name": "Malloc disk", 00:18:15.431 "block_size": 4128, 00:18:15.431 "num_blocks": 8192, 00:18:15.431 "uuid": "d5ee5881-db1d-4344-927b-dcb92f7095d7", 00:18:15.431 "md_size": 32, 00:18:15.431 
"md_interleave": true, 00:18:15.431 "dif_type": 0, 00:18:15.431 "assigned_rate_limits": { 00:18:15.431 "rw_ios_per_sec": 0, 00:18:15.431 "rw_mbytes_per_sec": 0, 00:18:15.431 "r_mbytes_per_sec": 0, 00:18:15.431 "w_mbytes_per_sec": 0 00:18:15.431 }, 00:18:15.431 "claimed": true, 00:18:15.431 "claim_type": "exclusive_write", 00:18:15.431 "zoned": false, 00:18:15.431 "supported_io_types": { 00:18:15.431 "read": true, 00:18:15.431 "write": true, 00:18:15.431 "unmap": true, 00:18:15.431 "flush": true, 00:18:15.431 "reset": true, 00:18:15.431 "nvme_admin": false, 00:18:15.431 "nvme_io": false, 00:18:15.431 "nvme_io_md": false, 00:18:15.431 "write_zeroes": true, 00:18:15.431 "zcopy": true, 00:18:15.431 "get_zone_info": false, 00:18:15.431 "zone_management": false, 00:18:15.431 "zone_append": false, 00:18:15.431 "compare": false, 00:18:15.431 "compare_and_write": false, 00:18:15.431 "abort": true, 00:18:15.431 "seek_hole": false, 00:18:15.431 "seek_data": false, 00:18:15.431 "copy": true, 00:18:15.431 "nvme_iov_md": false 00:18:15.431 }, 00:18:15.431 "memory_domains": [ 00:18:15.431 { 00:18:15.431 "dma_device_id": "system", 00:18:15.431 "dma_device_type": 1 00:18:15.431 }, 00:18:15.431 { 00:18:15.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.431 "dma_device_type": 2 00:18:15.431 } 00:18:15.431 ], 00:18:15.431 "driver_specific": {} 00:18:15.431 } 00:18:15.431 ] 00:18:15.431 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.431 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:15.431 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:15.431 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:15.431 11:56:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:15.431 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.431 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.431 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:15.431 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.431 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.431 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.431 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.431 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.431 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.431 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.431 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.431 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.431 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.431 "name": "Existed_Raid", 00:18:15.431 "uuid": "6ac47367-b7a7-4776-ac6e-2cf4e3c5073e", 00:18:15.431 "strip_size_kb": 0, 00:18:15.431 "state": "configuring", 00:18:15.431 "raid_level": "raid1", 
00:18:15.431 "superblock": true, 00:18:15.431 "num_base_bdevs": 2, 00:18:15.431 "num_base_bdevs_discovered": 1, 00:18:15.431 "num_base_bdevs_operational": 2, 00:18:15.431 "base_bdevs_list": [ 00:18:15.431 { 00:18:15.431 "name": "BaseBdev1", 00:18:15.431 "uuid": "d5ee5881-db1d-4344-927b-dcb92f7095d7", 00:18:15.431 "is_configured": true, 00:18:15.431 "data_offset": 256, 00:18:15.431 "data_size": 7936 00:18:15.431 }, 00:18:15.431 { 00:18:15.431 "name": "BaseBdev2", 00:18:15.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.431 "is_configured": false, 00:18:15.431 "data_offset": 0, 00:18:15.431 "data_size": 0 00:18:15.431 } 00:18:15.431 ] 00:18:15.431 }' 00:18:15.431 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.431 11:56:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.999 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:15.999 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.999 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.999 [2024-11-27 11:56:42.123901] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:15.999 [2024-11-27 11:56:42.123976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:15.999 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.999 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:15.999 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:15.999 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.999 [2024-11-27 11:56:42.135978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:15.999 [2024-11-27 11:56:42.138172] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:15.999 [2024-11-27 11:56:42.138289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:15.999 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.999 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:15.999 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:15.999 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:15.999 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:15.999 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:15.999 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.999 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.999 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:15.999 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.999 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.999 
11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.999 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.999 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.999 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.999 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.999 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.999 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.999 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.999 "name": "Existed_Raid", 00:18:15.999 "uuid": "d20ea330-2b8d-460c-ac2a-9876c15736bb", 00:18:15.999 "strip_size_kb": 0, 00:18:15.999 "state": "configuring", 00:18:15.999 "raid_level": "raid1", 00:18:15.999 "superblock": true, 00:18:15.999 "num_base_bdevs": 2, 00:18:15.999 "num_base_bdevs_discovered": 1, 00:18:15.999 "num_base_bdevs_operational": 2, 00:18:15.999 "base_bdevs_list": [ 00:18:15.999 { 00:18:15.999 "name": "BaseBdev1", 00:18:15.999 "uuid": "d5ee5881-db1d-4344-927b-dcb92f7095d7", 00:18:15.999 "is_configured": true, 00:18:15.999 "data_offset": 256, 00:18:15.999 "data_size": 7936 00:18:15.999 }, 00:18:15.999 { 00:18:15.999 "name": "BaseBdev2", 00:18:15.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.999 "is_configured": false, 00:18:15.999 "data_offset": 0, 00:18:15.999 "data_size": 0 00:18:15.999 } 00:18:15.999 ] 00:18:15.999 }' 00:18:15.999 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:15.999 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.258 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:16.258 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.258 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.518 [2024-11-27 11:56:42.664565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:16.518 [2024-11-27 11:56:42.664937] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:16.518 [2024-11-27 11:56:42.664994] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:16.518 [2024-11-27 11:56:42.665116] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:16.518 [2024-11-27 11:56:42.665237] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:16.518 [2024-11-27 11:56:42.665281] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:16.518 [2024-11-27 11:56:42.665406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.518 BaseBdev2 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.518 [ 00:18:16.518 { 00:18:16.518 "name": "BaseBdev2", 00:18:16.518 "aliases": [ 00:18:16.518 "85132702-b8b6-4f7c-97c2-90ea98a8e9b8" 00:18:16.518 ], 00:18:16.518 "product_name": "Malloc disk", 00:18:16.518 "block_size": 4128, 00:18:16.518 "num_blocks": 8192, 00:18:16.518 "uuid": "85132702-b8b6-4f7c-97c2-90ea98a8e9b8", 00:18:16.518 "md_size": 32, 00:18:16.518 "md_interleave": true, 00:18:16.518 "dif_type": 0, 00:18:16.518 "assigned_rate_limits": { 00:18:16.518 "rw_ios_per_sec": 0, 00:18:16.518 "rw_mbytes_per_sec": 0, 00:18:16.518 "r_mbytes_per_sec": 0, 00:18:16.518 "w_mbytes_per_sec": 0 00:18:16.518 }, 00:18:16.518 "claimed": true, 00:18:16.518 "claim_type": "exclusive_write", 
00:18:16.518 "zoned": false, 00:18:16.518 "supported_io_types": { 00:18:16.518 "read": true, 00:18:16.518 "write": true, 00:18:16.518 "unmap": true, 00:18:16.518 "flush": true, 00:18:16.518 "reset": true, 00:18:16.518 "nvme_admin": false, 00:18:16.518 "nvme_io": false, 00:18:16.518 "nvme_io_md": false, 00:18:16.518 "write_zeroes": true, 00:18:16.518 "zcopy": true, 00:18:16.518 "get_zone_info": false, 00:18:16.518 "zone_management": false, 00:18:16.518 "zone_append": false, 00:18:16.518 "compare": false, 00:18:16.518 "compare_and_write": false, 00:18:16.518 "abort": true, 00:18:16.518 "seek_hole": false, 00:18:16.518 "seek_data": false, 00:18:16.518 "copy": true, 00:18:16.518 "nvme_iov_md": false 00:18:16.518 }, 00:18:16.518 "memory_domains": [ 00:18:16.518 { 00:18:16.518 "dma_device_id": "system", 00:18:16.518 "dma_device_type": 1 00:18:16.518 }, 00:18:16.518 { 00:18:16.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.518 "dma_device_type": 2 00:18:16.518 } 00:18:16.518 ], 00:18:16.518 "driver_specific": {} 00:18:16.518 } 00:18:16.518 ] 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.518 
11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.518 "name": "Existed_Raid", 00:18:16.518 "uuid": "d20ea330-2b8d-460c-ac2a-9876c15736bb", 00:18:16.518 "strip_size_kb": 0, 00:18:16.518 "state": "online", 00:18:16.518 "raid_level": "raid1", 00:18:16.518 "superblock": true, 00:18:16.518 "num_base_bdevs": 2, 00:18:16.518 "num_base_bdevs_discovered": 2, 00:18:16.518 
"num_base_bdevs_operational": 2, 00:18:16.518 "base_bdevs_list": [ 00:18:16.518 { 00:18:16.518 "name": "BaseBdev1", 00:18:16.518 "uuid": "d5ee5881-db1d-4344-927b-dcb92f7095d7", 00:18:16.518 "is_configured": true, 00:18:16.518 "data_offset": 256, 00:18:16.518 "data_size": 7936 00:18:16.518 }, 00:18:16.518 { 00:18:16.518 "name": "BaseBdev2", 00:18:16.518 "uuid": "85132702-b8b6-4f7c-97c2-90ea98a8e9b8", 00:18:16.518 "is_configured": true, 00:18:16.518 "data_offset": 256, 00:18:16.518 "data_size": 7936 00:18:16.518 } 00:18:16.518 ] 00:18:16.518 }' 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.518 11:56:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.103 11:56:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.103 [2024-11-27 11:56:43.188166] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:17.103 "name": "Existed_Raid", 00:18:17.103 "aliases": [ 00:18:17.103 "d20ea330-2b8d-460c-ac2a-9876c15736bb" 00:18:17.103 ], 00:18:17.103 "product_name": "Raid Volume", 00:18:17.103 "block_size": 4128, 00:18:17.103 "num_blocks": 7936, 00:18:17.103 "uuid": "d20ea330-2b8d-460c-ac2a-9876c15736bb", 00:18:17.103 "md_size": 32, 00:18:17.103 "md_interleave": true, 00:18:17.103 "dif_type": 0, 00:18:17.103 "assigned_rate_limits": { 00:18:17.103 "rw_ios_per_sec": 0, 00:18:17.103 "rw_mbytes_per_sec": 0, 00:18:17.103 "r_mbytes_per_sec": 0, 00:18:17.103 "w_mbytes_per_sec": 0 00:18:17.103 }, 00:18:17.103 "claimed": false, 00:18:17.103 "zoned": false, 00:18:17.103 "supported_io_types": { 00:18:17.103 "read": true, 00:18:17.103 "write": true, 00:18:17.103 "unmap": false, 00:18:17.103 "flush": false, 00:18:17.103 "reset": true, 00:18:17.103 "nvme_admin": false, 00:18:17.103 "nvme_io": false, 00:18:17.103 "nvme_io_md": false, 00:18:17.103 "write_zeroes": true, 00:18:17.103 "zcopy": false, 00:18:17.103 "get_zone_info": false, 00:18:17.103 "zone_management": false, 00:18:17.103 "zone_append": false, 00:18:17.103 "compare": false, 00:18:17.103 "compare_and_write": false, 00:18:17.103 "abort": false, 00:18:17.103 "seek_hole": false, 00:18:17.103 "seek_data": false, 00:18:17.103 "copy": false, 00:18:17.103 "nvme_iov_md": false 00:18:17.103 }, 00:18:17.103 "memory_domains": [ 00:18:17.103 { 00:18:17.103 "dma_device_id": "system", 00:18:17.103 "dma_device_type": 1 00:18:17.103 }, 00:18:17.103 { 00:18:17.103 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:17.103 "dma_device_type": 2 00:18:17.103 }, 00:18:17.103 { 00:18:17.103 "dma_device_id": "system", 00:18:17.103 "dma_device_type": 1 00:18:17.103 }, 00:18:17.103 { 00:18:17.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.103 "dma_device_type": 2 00:18:17.103 } 00:18:17.103 ], 00:18:17.103 "driver_specific": { 00:18:17.103 "raid": { 00:18:17.103 "uuid": "d20ea330-2b8d-460c-ac2a-9876c15736bb", 00:18:17.103 "strip_size_kb": 0, 00:18:17.103 "state": "online", 00:18:17.103 "raid_level": "raid1", 00:18:17.103 "superblock": true, 00:18:17.103 "num_base_bdevs": 2, 00:18:17.103 "num_base_bdevs_discovered": 2, 00:18:17.103 "num_base_bdevs_operational": 2, 00:18:17.103 "base_bdevs_list": [ 00:18:17.103 { 00:18:17.103 "name": "BaseBdev1", 00:18:17.103 "uuid": "d5ee5881-db1d-4344-927b-dcb92f7095d7", 00:18:17.103 "is_configured": true, 00:18:17.103 "data_offset": 256, 00:18:17.103 "data_size": 7936 00:18:17.103 }, 00:18:17.103 { 00:18:17.103 "name": "BaseBdev2", 00:18:17.103 "uuid": "85132702-b8b6-4f7c-97c2-90ea98a8e9b8", 00:18:17.103 "is_configured": true, 00:18:17.103 "data_offset": 256, 00:18:17.103 "data_size": 7936 00:18:17.103 } 00:18:17.103 ] 00:18:17.103 } 00:18:17.103 } 00:18:17.103 }' 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:17.103 BaseBdev2' 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:17.103 
11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.103 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.103 [2024-11-27 11:56:43.415444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:17.363 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.363 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:17.363 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:17.363 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:17.363 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:17.363 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:17.363 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:17.363 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:17.363 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.363 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.363 11:56:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.363 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:17.363 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.363 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.363 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.363 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.363 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.364 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.364 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.364 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.364 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.364 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.364 "name": "Existed_Raid", 00:18:17.364 "uuid": "d20ea330-2b8d-460c-ac2a-9876c15736bb", 00:18:17.364 "strip_size_kb": 0, 00:18:17.364 "state": "online", 00:18:17.364 "raid_level": "raid1", 00:18:17.364 "superblock": true, 00:18:17.364 "num_base_bdevs": 2, 00:18:17.364 "num_base_bdevs_discovered": 1, 00:18:17.364 "num_base_bdevs_operational": 1, 00:18:17.364 "base_bdevs_list": [ 00:18:17.364 { 00:18:17.364 "name": null, 00:18:17.364 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:17.364 "is_configured": false, 00:18:17.364 "data_offset": 0, 00:18:17.364 "data_size": 7936 00:18:17.364 }, 00:18:17.364 { 00:18:17.364 "name": "BaseBdev2", 00:18:17.364 "uuid": "85132702-b8b6-4f7c-97c2-90ea98a8e9b8", 00:18:17.364 "is_configured": true, 00:18:17.364 "data_offset": 256, 00:18:17.364 "data_size": 7936 00:18:17.364 } 00:18:17.364 ] 00:18:17.364 }' 00:18:17.364 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.364 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.623 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:17.623 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:17.623 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.623 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.623 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.623 11:56:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:17.884 11:56:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.884 11:56:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:17.884 11:56:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:17.884 11:56:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:17.884 11:56:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.884 11:56:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.884 [2024-11-27 11:56:44.052776] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:17.884 [2024-11-27 11:56:44.052974] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:17.884 [2024-11-27 11:56:44.157031] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:17.884 [2024-11-27 11:56:44.157156] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:17.884 [2024-11-27 11:56:44.157177] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:17.884 11:56:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.884 11:56:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:17.884 11:56:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:17.884 11:56:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.884 11:56:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.884 11:56:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.884 11:56:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:17.884 11:56:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.884 11:56:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:17.884 11:56:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:17.884 11:56:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:17.884 11:56:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88532 00:18:17.884 11:56:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88532 ']' 00:18:17.884 11:56:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88532 00:18:17.884 11:56:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:17.884 11:56:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.884 11:56:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88532 00:18:17.884 11:56:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:17.884 11:56:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:17.884 11:56:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88532' 00:18:17.884 killing process with pid 88532 00:18:17.884 11:56:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88532 00:18:17.884 [2024-11-27 11:56:44.246969] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:17.884 11:56:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88532 00:18:17.884 [2024-11-27 11:56:44.266061] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:19.266 
11:56:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:18:19.266 00:18:19.266 real 0m5.273s 00:18:19.266 user 0m7.597s 00:18:19.266 sys 0m0.873s 00:18:19.266 11:56:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:19.266 ************************************ 00:18:19.266 END TEST raid_state_function_test_sb_md_interleaved 00:18:19.266 ************************************ 00:18:19.266 11:56:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.266 11:56:45 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:18:19.266 11:56:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:19.266 11:56:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:19.266 11:56:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:19.266 ************************************ 00:18:19.266 START TEST raid_superblock_test_md_interleaved 00:18:19.266 ************************************ 00:18:19.267 11:56:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:19.267 11:56:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:19.267 11:56:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:19.267 11:56:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:19.267 11:56:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:19.267 11:56:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:19.267 11:56:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:18:19.267 11:56:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:19.267 11:56:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:19.267 11:56:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:19.267 11:56:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:19.267 11:56:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:19.267 11:56:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:19.267 11:56:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:19.267 11:56:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:19.267 11:56:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:19.267 11:56:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88784 00:18:19.267 11:56:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:19.267 11:56:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88784 00:18:19.267 11:56:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88784 ']' 00:18:19.267 11:56:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.267 11:56:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.267 11:56:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.267 11:56:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.267 11:56:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.267 [2024-11-27 11:56:45.598703] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:18:19.267 [2024-11-27 11:56:45.598919] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88784 ] 00:18:19.527 [2024-11-27 11:56:45.774400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.527 [2024-11-27 11:56:45.888218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.786 [2024-11-27 11:56:46.099006] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:19.786 [2024-11-27 11:56:46.099130] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.357 malloc1 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.357 [2024-11-27 11:56:46.495045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:20.357 [2024-11-27 11:56:46.495154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.357 [2024-11-27 11:56:46.495203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:20.357 [2024-11-27 11:56:46.495214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.357 
[2024-11-27 11:56:46.497345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.357 [2024-11-27 11:56:46.497395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:20.357 pt1 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.357 malloc2 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.357 [2024-11-27 11:56:46.547059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:20.357 [2024-11-27 11:56:46.547159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.357 [2024-11-27 11:56:46.547224] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:20.357 [2024-11-27 11:56:46.547255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.357 [2024-11-27 11:56:46.549159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.357 [2024-11-27 11:56:46.549223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:20.357 pt2 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.357 [2024-11-27 11:56:46.559072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:20.357 [2024-11-27 11:56:46.560950] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:20.357 [2024-11-27 11:56:46.561183] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:20.357 [2024-11-27 11:56:46.561232] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:20.357 [2024-11-27 11:56:46.561332] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:20.357 [2024-11-27 11:56:46.561440] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:20.357 [2024-11-27 11:56:46.561482] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:20.357 [2024-11-27 11:56:46.561590] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.357 
11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.357 "name": "raid_bdev1", 00:18:20.357 "uuid": "7f8b34dd-be57-47f5-a1ad-d5d93de58dcc", 00:18:20.357 "strip_size_kb": 0, 00:18:20.357 "state": "online", 00:18:20.357 "raid_level": "raid1", 00:18:20.357 "superblock": true, 00:18:20.357 "num_base_bdevs": 2, 00:18:20.357 "num_base_bdevs_discovered": 2, 00:18:20.357 "num_base_bdevs_operational": 2, 00:18:20.357 "base_bdevs_list": [ 00:18:20.357 { 00:18:20.357 "name": "pt1", 00:18:20.357 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:20.357 "is_configured": true, 00:18:20.357 "data_offset": 256, 00:18:20.357 "data_size": 7936 00:18:20.357 }, 00:18:20.357 { 00:18:20.357 "name": "pt2", 00:18:20.357 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:20.357 "is_configured": true, 00:18:20.357 "data_offset": 256, 00:18:20.357 "data_size": 7936 00:18:20.357 } 00:18:20.357 ] 00:18:20.357 }' 00:18:20.357 11:56:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.357 11:56:46 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.927 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:20.927 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:20.927 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:20.927 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:20.927 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:20.927 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:20.927 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:20.927 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:20.927 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.927 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.927 [2024-11-27 11:56:47.094482] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:20.927 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.927 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:20.927 "name": "raid_bdev1", 00:18:20.927 "aliases": [ 00:18:20.927 "7f8b34dd-be57-47f5-a1ad-d5d93de58dcc" 00:18:20.927 ], 00:18:20.927 "product_name": "Raid Volume", 00:18:20.927 "block_size": 4128, 00:18:20.927 "num_blocks": 7936, 00:18:20.927 "uuid": "7f8b34dd-be57-47f5-a1ad-d5d93de58dcc", 00:18:20.927 "md_size": 32, 
00:18:20.927 "md_interleave": true, 00:18:20.927 "dif_type": 0, 00:18:20.927 "assigned_rate_limits": { 00:18:20.927 "rw_ios_per_sec": 0, 00:18:20.927 "rw_mbytes_per_sec": 0, 00:18:20.927 "r_mbytes_per_sec": 0, 00:18:20.927 "w_mbytes_per_sec": 0 00:18:20.927 }, 00:18:20.927 "claimed": false, 00:18:20.927 "zoned": false, 00:18:20.927 "supported_io_types": { 00:18:20.927 "read": true, 00:18:20.927 "write": true, 00:18:20.927 "unmap": false, 00:18:20.927 "flush": false, 00:18:20.927 "reset": true, 00:18:20.928 "nvme_admin": false, 00:18:20.928 "nvme_io": false, 00:18:20.928 "nvme_io_md": false, 00:18:20.928 "write_zeroes": true, 00:18:20.928 "zcopy": false, 00:18:20.928 "get_zone_info": false, 00:18:20.928 "zone_management": false, 00:18:20.928 "zone_append": false, 00:18:20.928 "compare": false, 00:18:20.928 "compare_and_write": false, 00:18:20.928 "abort": false, 00:18:20.928 "seek_hole": false, 00:18:20.928 "seek_data": false, 00:18:20.928 "copy": false, 00:18:20.928 "nvme_iov_md": false 00:18:20.928 }, 00:18:20.928 "memory_domains": [ 00:18:20.928 { 00:18:20.928 "dma_device_id": "system", 00:18:20.928 "dma_device_type": 1 00:18:20.928 }, 00:18:20.928 { 00:18:20.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.928 "dma_device_type": 2 00:18:20.928 }, 00:18:20.928 { 00:18:20.928 "dma_device_id": "system", 00:18:20.928 "dma_device_type": 1 00:18:20.928 }, 00:18:20.928 { 00:18:20.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.928 "dma_device_type": 2 00:18:20.928 } 00:18:20.928 ], 00:18:20.928 "driver_specific": { 00:18:20.928 "raid": { 00:18:20.928 "uuid": "7f8b34dd-be57-47f5-a1ad-d5d93de58dcc", 00:18:20.928 "strip_size_kb": 0, 00:18:20.928 "state": "online", 00:18:20.928 "raid_level": "raid1", 00:18:20.928 "superblock": true, 00:18:20.928 "num_base_bdevs": 2, 00:18:20.928 "num_base_bdevs_discovered": 2, 00:18:20.928 "num_base_bdevs_operational": 2, 00:18:20.928 "base_bdevs_list": [ 00:18:20.928 { 00:18:20.928 "name": "pt1", 00:18:20.928 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:18:20.928 "is_configured": true, 00:18:20.928 "data_offset": 256, 00:18:20.928 "data_size": 7936 00:18:20.928 }, 00:18:20.928 { 00:18:20.928 "name": "pt2", 00:18:20.928 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:20.928 "is_configured": true, 00:18:20.928 "data_offset": 256, 00:18:20.928 "data_size": 7936 00:18:20.928 } 00:18:20.928 ] 00:18:20.928 } 00:18:20.928 } 00:18:20.928 }' 00:18:20.928 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:20.928 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:20.928 pt2' 00:18:20.928 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:20.928 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:20.928 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:20.928 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:20.928 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.928 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.928 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:20.928 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.928 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:20.928 11:56:47 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:20.928 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:20.928 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:20.928 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:20.928 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.928 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.928 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.188 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.189 [2024-11-27 11:56:47.346095] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7f8b34dd-be57-47f5-a1ad-d5d93de58dcc 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 7f8b34dd-be57-47f5-a1ad-d5d93de58dcc ']' 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.189 [2024-11-27 11:56:47.389651] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:21.189 [2024-11-27 11:56:47.389740] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:21.189 [2024-11-27 11:56:47.389887] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:21.189 [2024-11-27 11:56:47.389983] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:21.189 [2024-11-27 11:56:47.390035] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.189 11:56:47 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:21.189 11:56:47 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.189 [2024-11-27 11:56:47.509480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:21.189 [2024-11-27 11:56:47.511403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:21.189 [2024-11-27 11:56:47.511478] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:18:21.189 [2024-11-27 11:56:47.511536] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:21.189 [2024-11-27 11:56:47.511551] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:21.189 [2024-11-27 11:56:47.511570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:21.189 request: 00:18:21.189 { 00:18:21.189 "name": "raid_bdev1", 00:18:21.189 "raid_level": "raid1", 00:18:21.189 "base_bdevs": [ 00:18:21.189 "malloc1", 00:18:21.189 "malloc2" 00:18:21.189 ], 00:18:21.189 "superblock": false, 00:18:21.189 "method": "bdev_raid_create", 00:18:21.189 "req_id": 1 00:18:21.189 } 00:18:21.189 Got JSON-RPC error response 00:18:21.189 response: 00:18:21.189 { 00:18:21.189 "code": -17, 00:18:21.189 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:21.189 } 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.189 11:56:47 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.189 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.449 [2024-11-27 11:56:47.573359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:21.449 [2024-11-27 11:56:47.573450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.449 [2024-11-27 11:56:47.573471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:21.449 [2024-11-27 11:56:47.573483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.449 [2024-11-27 11:56:47.575701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.449 [2024-11-27 11:56:47.575797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:21.449 [2024-11-27 11:56:47.575885] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:21.449 [2024-11-27 11:56:47.575974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:21.449 pt1 00:18:21.449 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.449 11:56:47 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:21.449 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.449 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:21.449 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.449 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.449 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:21.449 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.449 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.449 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.449 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.449 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.449 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.449 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.449 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.449 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.449 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.449 
"name": "raid_bdev1", 00:18:21.449 "uuid": "7f8b34dd-be57-47f5-a1ad-d5d93de58dcc", 00:18:21.449 "strip_size_kb": 0, 00:18:21.449 "state": "configuring", 00:18:21.449 "raid_level": "raid1", 00:18:21.449 "superblock": true, 00:18:21.449 "num_base_bdevs": 2, 00:18:21.449 "num_base_bdevs_discovered": 1, 00:18:21.449 "num_base_bdevs_operational": 2, 00:18:21.449 "base_bdevs_list": [ 00:18:21.449 { 00:18:21.449 "name": "pt1", 00:18:21.449 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:21.449 "is_configured": true, 00:18:21.449 "data_offset": 256, 00:18:21.449 "data_size": 7936 00:18:21.449 }, 00:18:21.449 { 00:18:21.449 "name": null, 00:18:21.449 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:21.449 "is_configured": false, 00:18:21.449 "data_offset": 256, 00:18:21.449 "data_size": 7936 00:18:21.449 } 00:18:21.449 ] 00:18:21.449 }' 00:18:21.449 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.449 11:56:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.709 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:21.709 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:21.709 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:21.709 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:21.709 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.709 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.709 [2024-11-27 11:56:48.052548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:21.709 [2024-11-27 11:56:48.052698] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.709 [2024-11-27 11:56:48.052758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:21.709 [2024-11-27 11:56:48.052797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.709 [2024-11-27 11:56:48.053047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.709 [2024-11-27 11:56:48.053105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:21.709 [2024-11-27 11:56:48.053195] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:21.709 [2024-11-27 11:56:48.053250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:21.709 [2024-11-27 11:56:48.053381] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:21.709 [2024-11-27 11:56:48.053426] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:21.709 [2024-11-27 11:56:48.053544] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:21.709 [2024-11-27 11:56:48.053663] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:21.709 [2024-11-27 11:56:48.053701] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:21.709 [2024-11-27 11:56:48.053845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.709 pt2 00:18:21.709 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.710 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:21.710 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:21.710 11:56:48 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:21.710 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.710 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.710 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.710 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.710 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:21.710 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.710 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.710 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.710 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.710 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.710 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.710 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.710 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.710 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.969 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.969 "name": 
"raid_bdev1", 00:18:21.969 "uuid": "7f8b34dd-be57-47f5-a1ad-d5d93de58dcc", 00:18:21.969 "strip_size_kb": 0, 00:18:21.969 "state": "online", 00:18:21.969 "raid_level": "raid1", 00:18:21.969 "superblock": true, 00:18:21.969 "num_base_bdevs": 2, 00:18:21.969 "num_base_bdevs_discovered": 2, 00:18:21.969 "num_base_bdevs_operational": 2, 00:18:21.969 "base_bdevs_list": [ 00:18:21.969 { 00:18:21.969 "name": "pt1", 00:18:21.969 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:21.969 "is_configured": true, 00:18:21.969 "data_offset": 256, 00:18:21.969 "data_size": 7936 00:18:21.969 }, 00:18:21.969 { 00:18:21.969 "name": "pt2", 00:18:21.969 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:21.969 "is_configured": true, 00:18:21.969 "data_offset": 256, 00:18:21.969 "data_size": 7936 00:18:21.969 } 00:18:21.969 ] 00:18:21.969 }' 00:18:21.969 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.969 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.230 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:22.230 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:22.230 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:22.230 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:22.230 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:22.230 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:22.230 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:22.230 11:56:48 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.230 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:22.230 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.230 [2024-11-27 11:56:48.504112] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:22.230 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.230 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:22.230 "name": "raid_bdev1", 00:18:22.230 "aliases": [ 00:18:22.230 "7f8b34dd-be57-47f5-a1ad-d5d93de58dcc" 00:18:22.230 ], 00:18:22.230 "product_name": "Raid Volume", 00:18:22.230 "block_size": 4128, 00:18:22.230 "num_blocks": 7936, 00:18:22.230 "uuid": "7f8b34dd-be57-47f5-a1ad-d5d93de58dcc", 00:18:22.230 "md_size": 32, 00:18:22.230 "md_interleave": true, 00:18:22.230 "dif_type": 0, 00:18:22.230 "assigned_rate_limits": { 00:18:22.230 "rw_ios_per_sec": 0, 00:18:22.230 "rw_mbytes_per_sec": 0, 00:18:22.230 "r_mbytes_per_sec": 0, 00:18:22.230 "w_mbytes_per_sec": 0 00:18:22.230 }, 00:18:22.230 "claimed": false, 00:18:22.230 "zoned": false, 00:18:22.230 "supported_io_types": { 00:18:22.230 "read": true, 00:18:22.230 "write": true, 00:18:22.230 "unmap": false, 00:18:22.230 "flush": false, 00:18:22.230 "reset": true, 00:18:22.230 "nvme_admin": false, 00:18:22.230 "nvme_io": false, 00:18:22.230 "nvme_io_md": false, 00:18:22.230 "write_zeroes": true, 00:18:22.230 "zcopy": false, 00:18:22.230 "get_zone_info": false, 00:18:22.230 "zone_management": false, 00:18:22.230 "zone_append": false, 00:18:22.230 "compare": false, 00:18:22.230 "compare_and_write": false, 00:18:22.230 "abort": false, 00:18:22.230 "seek_hole": false, 00:18:22.230 "seek_data": false, 00:18:22.230 "copy": false, 00:18:22.230 "nvme_iov_md": 
false 00:18:22.230 }, 00:18:22.230 "memory_domains": [ 00:18:22.230 { 00:18:22.230 "dma_device_id": "system", 00:18:22.230 "dma_device_type": 1 00:18:22.230 }, 00:18:22.230 { 00:18:22.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.230 "dma_device_type": 2 00:18:22.230 }, 00:18:22.230 { 00:18:22.230 "dma_device_id": "system", 00:18:22.230 "dma_device_type": 1 00:18:22.230 }, 00:18:22.230 { 00:18:22.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.230 "dma_device_type": 2 00:18:22.230 } 00:18:22.230 ], 00:18:22.230 "driver_specific": { 00:18:22.230 "raid": { 00:18:22.230 "uuid": "7f8b34dd-be57-47f5-a1ad-d5d93de58dcc", 00:18:22.230 "strip_size_kb": 0, 00:18:22.230 "state": "online", 00:18:22.230 "raid_level": "raid1", 00:18:22.230 "superblock": true, 00:18:22.230 "num_base_bdevs": 2, 00:18:22.230 "num_base_bdevs_discovered": 2, 00:18:22.230 "num_base_bdevs_operational": 2, 00:18:22.230 "base_bdevs_list": [ 00:18:22.230 { 00:18:22.230 "name": "pt1", 00:18:22.230 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:22.230 "is_configured": true, 00:18:22.230 "data_offset": 256, 00:18:22.230 "data_size": 7936 00:18:22.230 }, 00:18:22.230 { 00:18:22.230 "name": "pt2", 00:18:22.230 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:22.230 "is_configured": true, 00:18:22.230 "data_offset": 256, 00:18:22.230 "data_size": 7936 00:18:22.230 } 00:18:22.230 ] 00:18:22.230 } 00:18:22.230 } 00:18:22.230 }' 00:18:22.230 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:22.230 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:22.230 pt2' 00:18:22.230 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:22.491 [2024-11-27 11:56:48.751710] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 7f8b34dd-be57-47f5-a1ad-d5d93de58dcc '!=' 7f8b34dd-be57-47f5-a1ad-d5d93de58dcc ']' 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.491 [2024-11-27 11:56:48.799368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:22.491 "name": "raid_bdev1", 00:18:22.491 "uuid": "7f8b34dd-be57-47f5-a1ad-d5d93de58dcc", 00:18:22.491 "strip_size_kb": 0, 00:18:22.491 "state": "online", 00:18:22.491 "raid_level": "raid1", 00:18:22.491 "superblock": true, 00:18:22.491 "num_base_bdevs": 2, 00:18:22.491 "num_base_bdevs_discovered": 1, 00:18:22.491 "num_base_bdevs_operational": 1, 00:18:22.491 "base_bdevs_list": [ 00:18:22.491 { 00:18:22.491 "name": null, 00:18:22.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.491 "is_configured": false, 00:18:22.491 "data_offset": 0, 00:18:22.491 "data_size": 7936 00:18:22.491 }, 00:18:22.491 { 00:18:22.491 "name": "pt2", 00:18:22.491 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:22.491 "is_configured": true, 00:18:22.491 "data_offset": 256, 00:18:22.491 "data_size": 7936 00:18:22.491 } 00:18:22.491 ] 00:18:22.491 }' 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.491 11:56:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.060 [2024-11-27 11:56:49.270480] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:23.060 [2024-11-27 11:56:49.270513] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:23.060 [2024-11-27 11:56:49.270604] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:23.060 [2024-11-27 11:56:49.270656] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:18:23.060 [2024-11-27 11:56:49.270668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.060 [2024-11-27 11:56:49.346356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:23.060 [2024-11-27 11:56:49.346468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.060 [2024-11-27 11:56:49.346529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:23.060 [2024-11-27 11:56:49.346566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.060 [2024-11-27 11:56:49.348741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.060 [2024-11-27 11:56:49.348831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:23.060 [2024-11-27 11:56:49.348956] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:23.060 [2024-11-27 11:56:49.349059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:23.060 [2024-11-27 11:56:49.349173] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:23.060 [2024-11-27 11:56:49.349220] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:18:23.060 [2024-11-27 11:56:49.349358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:23.060 [2024-11-27 11:56:49.349498] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:23.060 [2024-11-27 11:56:49.349532] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:23.060 [2024-11-27 11:56:49.349636] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.060 pt2 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.060 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.061 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:23.061 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.061 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.061 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.061 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.061 11:56:49 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.061 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.061 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.061 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.061 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.061 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.061 "name": "raid_bdev1", 00:18:23.061 "uuid": "7f8b34dd-be57-47f5-a1ad-d5d93de58dcc", 00:18:23.061 "strip_size_kb": 0, 00:18:23.061 "state": "online", 00:18:23.061 "raid_level": "raid1", 00:18:23.061 "superblock": true, 00:18:23.061 "num_base_bdevs": 2, 00:18:23.061 "num_base_bdevs_discovered": 1, 00:18:23.061 "num_base_bdevs_operational": 1, 00:18:23.061 "base_bdevs_list": [ 00:18:23.061 { 00:18:23.061 "name": null, 00:18:23.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.061 "is_configured": false, 00:18:23.061 "data_offset": 256, 00:18:23.061 "data_size": 7936 00:18:23.061 }, 00:18:23.061 { 00:18:23.061 "name": "pt2", 00:18:23.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:23.061 "is_configured": true, 00:18:23.061 "data_offset": 256, 00:18:23.061 "data_size": 7936 00:18:23.061 } 00:18:23.061 ] 00:18:23.061 }' 00:18:23.061 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.061 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.652 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:23.652 11:56:49 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.652 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.652 [2024-11-27 11:56:49.789570] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:23.652 [2024-11-27 11:56:49.789607] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:23.652 [2024-11-27 11:56:49.789699] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:23.652 [2024-11-27 11:56:49.789759] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:23.652 [2024-11-27 11:56:49.789769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:23.652 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.652 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:23.652 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.652 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.652 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.652 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.652 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:23.652 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:23.652 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:23.652 11:56:49 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:23.652 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.652 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.652 [2024-11-27 11:56:49.849496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:23.652 [2024-11-27 11:56:49.849615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.652 [2024-11-27 11:56:49.849657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:23.652 [2024-11-27 11:56:49.849667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.652 [2024-11-27 11:56:49.851759] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.652 [2024-11-27 11:56:49.851810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:23.652 [2024-11-27 11:56:49.851887] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:23.652 [2024-11-27 11:56:49.851974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:23.652 [2024-11-27 11:56:49.852087] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:23.652 [2024-11-27 11:56:49.852098] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:23.652 [2024-11-27 11:56:49.852121] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:23.652 [2024-11-27 11:56:49.852185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:23.652 [2024-11-27 11:56:49.852269] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:18:23.652 [2024-11-27 11:56:49.852278] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:23.652 [2024-11-27 11:56:49.852358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:23.652 [2024-11-27 11:56:49.852434] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:23.652 [2024-11-27 11:56:49.852452] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:23.652 [2024-11-27 11:56:49.852533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.652 pt1 00:18:23.652 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.652 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:23.652 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:23.652 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.652 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.652 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.652 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.652 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:23.652 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.652 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.652 11:56:49 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.652 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.653 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.653 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.653 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.653 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.653 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.653 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.653 "name": "raid_bdev1", 00:18:23.653 "uuid": "7f8b34dd-be57-47f5-a1ad-d5d93de58dcc", 00:18:23.653 "strip_size_kb": 0, 00:18:23.653 "state": "online", 00:18:23.653 "raid_level": "raid1", 00:18:23.653 "superblock": true, 00:18:23.653 "num_base_bdevs": 2, 00:18:23.653 "num_base_bdevs_discovered": 1, 00:18:23.653 "num_base_bdevs_operational": 1, 00:18:23.653 "base_bdevs_list": [ 00:18:23.653 { 00:18:23.653 "name": null, 00:18:23.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.653 "is_configured": false, 00:18:23.653 "data_offset": 256, 00:18:23.653 "data_size": 7936 00:18:23.653 }, 00:18:23.653 { 00:18:23.653 "name": "pt2", 00:18:23.653 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:23.653 "is_configured": true, 00:18:23.653 "data_offset": 256, 00:18:23.653 "data_size": 7936 00:18:23.653 } 00:18:23.653 ] 00:18:23.653 }' 00:18:23.653 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.653 11:56:49 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:24.227 11:56:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:24.227 11:56:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.227 11:56:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:24.227 11:56:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.227 11:56:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.227 11:56:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:24.227 11:56:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:24.227 11:56:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.227 11:56:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.227 11:56:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:24.227 [2024-11-27 11:56:50.348951] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:24.227 11:56:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.227 11:56:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 7f8b34dd-be57-47f5-a1ad-d5d93de58dcc '!=' 7f8b34dd-be57-47f5-a1ad-d5d93de58dcc ']' 00:18:24.227 11:56:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88784 00:18:24.227 11:56:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88784 ']' 00:18:24.227 11:56:50 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88784 00:18:24.227 11:56:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:24.227 11:56:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.227 11:56:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88784 00:18:24.227 11:56:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:24.227 killing process with pid 88784 00:18:24.227 11:56:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:24.227 11:56:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88784' 00:18:24.227 11:56:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88784 00:18:24.227 [2024-11-27 11:56:50.425489] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:24.227 [2024-11-27 11:56:50.425601] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.227 11:56:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88784 00:18:24.227 [2024-11-27 11:56:50.425660] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.227 [2024-11-27 11:56:50.425677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:24.486 [2024-11-27 11:56:50.645167] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:25.865 ************************************ 00:18:25.865 END TEST raid_superblock_test_md_interleaved 00:18:25.865 ************************************ 00:18:25.865 11:56:51 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:25.865 00:18:25.865 real 0m6.324s 00:18:25.865 user 0m9.553s 00:18:25.865 sys 0m1.142s 00:18:25.865 11:56:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:25.865 11:56:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.865 11:56:51 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:25.865 11:56:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:25.865 11:56:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:25.865 11:56:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:25.865 ************************************ 00:18:25.865 START TEST raid_rebuild_test_sb_md_interleaved 00:18:25.865 ************************************ 00:18:25.865 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:18:25.865 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:25.865 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:25.865 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:25.866 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:25.866 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:25.866 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:25.866 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:25.866 11:56:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:25.866 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:25.866 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:25.866 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:25.866 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:25.866 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:25.866 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:25.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.866 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:25.866 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:25.866 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:25.866 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:25.866 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:25.866 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:25.866 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:25.866 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:25.866 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:25.866 11:56:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:25.866 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89107 00:18:25.866 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89107 00:18:25.866 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89107 ']' 00:18:25.866 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.866 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.866 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.866 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.866 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.866 11:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:25.866 [2024-11-27 11:56:51.994725] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:18:25.866 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:25.866 Zero copy mechanism will not be used. 
00:18:25.866 [2024-11-27 11:56:51.994950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89107 ] 00:18:25.866 [2024-11-27 11:56:52.169806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.125 [2024-11-27 11:56:52.294345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.383 [2024-11-27 11:56:52.509104] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:26.383 [2024-11-27 11:56:52.509183] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:26.643 11:56:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.643 11:56:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:26.643 11:56:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:26.643 11:56:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:26.643 11:56:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.643 11:56:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.903 BaseBdev1_malloc 00:18:26.903 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.903 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:26.903 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.903 11:56:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.903 [2024-11-27 11:56:53.038708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:26.903 [2024-11-27 11:56:53.038794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.903 [2024-11-27 11:56:53.038820] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:26.903 [2024-11-27 11:56:53.038848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.903 [2024-11-27 11:56:53.040976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.903 [2024-11-27 11:56:53.041024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:26.903 BaseBdev1 00:18:26.903 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.903 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:26.903 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:26.903 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.903 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.903 BaseBdev2_malloc 00:18:26.903 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.903 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:26.903 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.903 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:26.903 [2024-11-27 11:56:53.095791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:26.904 [2024-11-27 11:56:53.095893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.904 [2024-11-27 11:56:53.095943] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:26.904 [2024-11-27 11:56:53.095961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.904 [2024-11-27 11:56:53.098064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.904 [2024-11-27 11:56:53.098111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:26.904 BaseBdev2 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.904 spare_malloc 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.904 spare_delay 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.904 [2024-11-27 11:56:53.178801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:26.904 [2024-11-27 11:56:53.178914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.904 [2024-11-27 11:56:53.178946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:26.904 [2024-11-27 11:56:53.178959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.904 [2024-11-27 11:56:53.181147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.904 [2024-11-27 11:56:53.181272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:26.904 spare 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.904 [2024-11-27 11:56:53.190866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:26.904 [2024-11-27 11:56:53.192957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:26.904 [2024-11-27 
11:56:53.193197] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:26.904 [2024-11-27 11:56:53.193216] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:26.904 [2024-11-27 11:56:53.193331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:26.904 [2024-11-27 11:56:53.193415] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:26.904 [2024-11-27 11:56:53.193424] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:26.904 [2024-11-27 11:56:53.193518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.904 "name": "raid_bdev1", 00:18:26.904 "uuid": "6b463064-ad84-4232-bad4-869982bdc229", 00:18:26.904 "strip_size_kb": 0, 00:18:26.904 "state": "online", 00:18:26.904 "raid_level": "raid1", 00:18:26.904 "superblock": true, 00:18:26.904 "num_base_bdevs": 2, 00:18:26.904 "num_base_bdevs_discovered": 2, 00:18:26.904 "num_base_bdevs_operational": 2, 00:18:26.904 "base_bdevs_list": [ 00:18:26.904 { 00:18:26.904 "name": "BaseBdev1", 00:18:26.904 "uuid": "58adf6b2-7645-5d8e-a9e9-f26c822fdc44", 00:18:26.904 "is_configured": true, 00:18:26.904 "data_offset": 256, 00:18:26.904 "data_size": 7936 00:18:26.904 }, 00:18:26.904 { 00:18:26.904 "name": "BaseBdev2", 00:18:26.904 "uuid": "24783c3b-9852-5d9b-841c-17984bc8630b", 00:18:26.904 "is_configured": true, 00:18:26.904 "data_offset": 256, 00:18:26.904 "data_size": 7936 00:18:26.904 } 00:18:26.904 ] 00:18:26.904 }' 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.904 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.472 11:56:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.472 [2024-11-27 11:56:53.662366] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:27.472 11:56:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.472 [2024-11-27 11:56:53.745905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.472 11:56:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.472 "name": "raid_bdev1", 00:18:27.472 "uuid": "6b463064-ad84-4232-bad4-869982bdc229", 00:18:27.472 "strip_size_kb": 0, 00:18:27.472 "state": "online", 00:18:27.472 "raid_level": "raid1", 00:18:27.472 "superblock": true, 00:18:27.472 "num_base_bdevs": 2, 00:18:27.472 "num_base_bdevs_discovered": 1, 00:18:27.472 "num_base_bdevs_operational": 1, 00:18:27.472 "base_bdevs_list": [ 00:18:27.472 { 00:18:27.472 "name": null, 00:18:27.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.472 "is_configured": false, 00:18:27.472 "data_offset": 0, 00:18:27.472 "data_size": 7936 00:18:27.472 }, 00:18:27.472 { 00:18:27.472 "name": "BaseBdev2", 00:18:27.472 "uuid": "24783c3b-9852-5d9b-841c-17984bc8630b", 00:18:27.472 "is_configured": true, 00:18:27.472 "data_offset": 256, 00:18:27.472 "data_size": 7936 00:18:27.472 } 00:18:27.472 ] 00:18:27.472 }' 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.472 11:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.041 11:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:28.041 11:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.041 11:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.041 [2024-11-27 11:56:54.205122] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:28.041 [2024-11-27 11:56:54.224617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:28.041 11:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.041 11:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:28.041 [2024-11-27 11:56:54.226806] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:28.981 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:28.981 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.981 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:28.981 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:28.981 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.981 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.981 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.981 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.981 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.981 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.981 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.981 "name": "raid_bdev1", 00:18:28.981 
"uuid": "6b463064-ad84-4232-bad4-869982bdc229", 00:18:28.981 "strip_size_kb": 0, 00:18:28.981 "state": "online", 00:18:28.981 "raid_level": "raid1", 00:18:28.981 "superblock": true, 00:18:28.981 "num_base_bdevs": 2, 00:18:28.981 "num_base_bdevs_discovered": 2, 00:18:28.981 "num_base_bdevs_operational": 2, 00:18:28.981 "process": { 00:18:28.981 "type": "rebuild", 00:18:28.981 "target": "spare", 00:18:28.981 "progress": { 00:18:28.981 "blocks": 2560, 00:18:28.981 "percent": 32 00:18:28.981 } 00:18:28.981 }, 00:18:28.981 "base_bdevs_list": [ 00:18:28.981 { 00:18:28.981 "name": "spare", 00:18:28.982 "uuid": "5c9af54c-34ef-503d-9f6c-358d9e14a5f7", 00:18:28.982 "is_configured": true, 00:18:28.982 "data_offset": 256, 00:18:28.982 "data_size": 7936 00:18:28.982 }, 00:18:28.982 { 00:18:28.982 "name": "BaseBdev2", 00:18:28.982 "uuid": "24783c3b-9852-5d9b-841c-17984bc8630b", 00:18:28.982 "is_configured": true, 00:18:28.982 "data_offset": 256, 00:18:28.982 "data_size": 7936 00:18:28.982 } 00:18:28.982 ] 00:18:28.982 }' 00:18:28.982 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.982 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:28.982 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:29.242 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:29.242 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:29.242 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.242 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.242 [2024-11-27 11:56:55.390077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:29.242 [2024-11-27 11:56:55.433293] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:29.242 [2024-11-27 11:56:55.433390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.242 [2024-11-27 11:56:55.433407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:29.242 [2024-11-27 11:56:55.433420] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:29.242 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.242 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:29.242 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.242 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.242 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.242 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.242 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:29.242 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.242 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.242 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.242 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.242 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.242 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.242 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.242 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.242 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.242 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.242 "name": "raid_bdev1", 00:18:29.242 "uuid": "6b463064-ad84-4232-bad4-869982bdc229", 00:18:29.242 "strip_size_kb": 0, 00:18:29.242 "state": "online", 00:18:29.242 "raid_level": "raid1", 00:18:29.242 "superblock": true, 00:18:29.242 "num_base_bdevs": 2, 00:18:29.242 "num_base_bdevs_discovered": 1, 00:18:29.242 "num_base_bdevs_operational": 1, 00:18:29.242 "base_bdevs_list": [ 00:18:29.242 { 00:18:29.242 "name": null, 00:18:29.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.242 "is_configured": false, 00:18:29.242 "data_offset": 0, 00:18:29.242 "data_size": 7936 00:18:29.242 }, 00:18:29.242 { 00:18:29.242 "name": "BaseBdev2", 00:18:29.242 "uuid": "24783c3b-9852-5d9b-841c-17984bc8630b", 00:18:29.242 "is_configured": true, 00:18:29.242 "data_offset": 256, 00:18:29.242 "data_size": 7936 00:18:29.242 } 00:18:29.242 ] 00:18:29.242 }' 00:18:29.242 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.242 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.813 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:29.813 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:18:29.813 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:29.813 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:29.813 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:29.813 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.813 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.813 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.813 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.813 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.813 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:29.813 "name": "raid_bdev1", 00:18:29.813 "uuid": "6b463064-ad84-4232-bad4-869982bdc229", 00:18:29.813 "strip_size_kb": 0, 00:18:29.813 "state": "online", 00:18:29.813 "raid_level": "raid1", 00:18:29.813 "superblock": true, 00:18:29.813 "num_base_bdevs": 2, 00:18:29.813 "num_base_bdevs_discovered": 1, 00:18:29.813 "num_base_bdevs_operational": 1, 00:18:29.813 "base_bdevs_list": [ 00:18:29.813 { 00:18:29.813 "name": null, 00:18:29.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.813 "is_configured": false, 00:18:29.813 "data_offset": 0, 00:18:29.813 "data_size": 7936 00:18:29.813 }, 00:18:29.813 { 00:18:29.813 "name": "BaseBdev2", 00:18:29.813 "uuid": "24783c3b-9852-5d9b-841c-17984bc8630b", 00:18:29.813 "is_configured": true, 00:18:29.813 "data_offset": 256, 00:18:29.813 "data_size": 7936 00:18:29.813 } 00:18:29.813 ] 00:18:29.813 }' 
00:18:29.813 11:56:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:29.813 11:56:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:29.813 11:56:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:29.813 11:56:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:29.813 11:56:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:29.813 11:56:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.813 11:56:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.813 [2024-11-27 11:56:56.067415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:29.813 [2024-11-27 11:56:56.085672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:29.813 11:56:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.813 11:56:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:29.813 [2024-11-27 11:56:56.087799] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:30.754 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:30.754 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.754 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:30.754 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:18:30.754 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.754 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.754 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.754 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.754 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.754 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.014 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.014 "name": "raid_bdev1", 00:18:31.014 "uuid": "6b463064-ad84-4232-bad4-869982bdc229", 00:18:31.015 "strip_size_kb": 0, 00:18:31.015 "state": "online", 00:18:31.015 "raid_level": "raid1", 00:18:31.015 "superblock": true, 00:18:31.015 "num_base_bdevs": 2, 00:18:31.015 "num_base_bdevs_discovered": 2, 00:18:31.015 "num_base_bdevs_operational": 2, 00:18:31.015 "process": { 00:18:31.015 "type": "rebuild", 00:18:31.015 "target": "spare", 00:18:31.015 "progress": { 00:18:31.015 "blocks": 2560, 00:18:31.015 "percent": 32 00:18:31.015 } 00:18:31.015 }, 00:18:31.015 "base_bdevs_list": [ 00:18:31.015 { 00:18:31.015 "name": "spare", 00:18:31.015 "uuid": "5c9af54c-34ef-503d-9f6c-358d9e14a5f7", 00:18:31.015 "is_configured": true, 00:18:31.015 "data_offset": 256, 00:18:31.015 "data_size": 7936 00:18:31.015 }, 00:18:31.015 { 00:18:31.015 "name": "BaseBdev2", 00:18:31.015 "uuid": "24783c3b-9852-5d9b-841c-17984bc8630b", 00:18:31.015 "is_configured": true, 00:18:31.015 "data_offset": 256, 00:18:31.015 "data_size": 7936 00:18:31.015 } 00:18:31.015 ] 00:18:31.015 }' 00:18:31.015 11:56:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.015 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:31.015 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.015 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:31.015 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:31.015 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:31.015 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:31.015 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:31.015 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:31.015 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:31.015 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=752 00:18:31.015 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:31.015 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:31.015 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.015 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:31.015 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:31.015 11:56:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.015 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.015 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.015 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.015 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.015 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.015 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.015 "name": "raid_bdev1", 00:18:31.015 "uuid": "6b463064-ad84-4232-bad4-869982bdc229", 00:18:31.015 "strip_size_kb": 0, 00:18:31.015 "state": "online", 00:18:31.015 "raid_level": "raid1", 00:18:31.015 "superblock": true, 00:18:31.015 "num_base_bdevs": 2, 00:18:31.015 "num_base_bdevs_discovered": 2, 00:18:31.015 "num_base_bdevs_operational": 2, 00:18:31.015 "process": { 00:18:31.015 "type": "rebuild", 00:18:31.015 "target": "spare", 00:18:31.015 "progress": { 00:18:31.015 "blocks": 2816, 00:18:31.015 "percent": 35 00:18:31.015 } 00:18:31.015 }, 00:18:31.015 "base_bdevs_list": [ 00:18:31.015 { 00:18:31.015 "name": "spare", 00:18:31.015 "uuid": "5c9af54c-34ef-503d-9f6c-358d9e14a5f7", 00:18:31.015 "is_configured": true, 00:18:31.015 "data_offset": 256, 00:18:31.015 "data_size": 7936 00:18:31.015 }, 00:18:31.015 { 00:18:31.015 "name": "BaseBdev2", 00:18:31.015 "uuid": "24783c3b-9852-5d9b-841c-17984bc8630b", 00:18:31.015 "is_configured": true, 00:18:31.015 "data_offset": 256, 00:18:31.015 "data_size": 7936 00:18:31.015 } 00:18:31.015 ] 00:18:31.015 }' 00:18:31.015 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.015 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:31.015 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.015 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:31.015 11:56:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:32.394 11:56:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:32.394 11:56:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:32.394 11:56:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.394 11:56:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:32.394 11:56:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:32.394 11:56:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.394 11:56:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.394 11:56:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.394 11:56:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.394 11:56:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.394 11:56:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.394 11:56:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.394 "name": "raid_bdev1", 00:18:32.394 "uuid": "6b463064-ad84-4232-bad4-869982bdc229", 00:18:32.394 "strip_size_kb": 0, 00:18:32.394 "state": "online", 00:18:32.394 "raid_level": "raid1", 00:18:32.394 "superblock": true, 00:18:32.394 "num_base_bdevs": 2, 00:18:32.394 "num_base_bdevs_discovered": 2, 00:18:32.394 "num_base_bdevs_operational": 2, 00:18:32.394 "process": { 00:18:32.394 "type": "rebuild", 00:18:32.394 "target": "spare", 00:18:32.394 "progress": { 00:18:32.394 "blocks": 5888, 00:18:32.394 "percent": 74 00:18:32.394 } 00:18:32.394 }, 00:18:32.394 "base_bdevs_list": [ 00:18:32.394 { 00:18:32.394 "name": "spare", 00:18:32.394 "uuid": "5c9af54c-34ef-503d-9f6c-358d9e14a5f7", 00:18:32.394 "is_configured": true, 00:18:32.394 "data_offset": 256, 00:18:32.394 "data_size": 7936 00:18:32.394 }, 00:18:32.394 { 00:18:32.394 "name": "BaseBdev2", 00:18:32.394 "uuid": "24783c3b-9852-5d9b-841c-17984bc8630b", 00:18:32.394 "is_configured": true, 00:18:32.394 "data_offset": 256, 00:18:32.394 "data_size": 7936 00:18:32.394 } 00:18:32.394 ] 00:18:32.394 }' 00:18:32.394 11:56:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.394 11:56:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:32.394 11:56:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.395 11:56:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:32.395 11:56:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:32.963 [2024-11-27 11:56:59.203869] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:32.964 [2024-11-27 11:56:59.203971] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:32.964 [2024-11-27 11:56:59.204105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.225 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:33.225 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:33.225 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.225 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:33.225 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:33.225 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.225 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.225 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.225 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.225 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.225 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.225 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.225 "name": "raid_bdev1", 00:18:33.225 "uuid": "6b463064-ad84-4232-bad4-869982bdc229", 00:18:33.225 "strip_size_kb": 0, 00:18:33.225 "state": "online", 00:18:33.225 "raid_level": "raid1", 00:18:33.225 "superblock": true, 00:18:33.225 "num_base_bdevs": 2, 00:18:33.225 
"num_base_bdevs_discovered": 2, 00:18:33.225 "num_base_bdevs_operational": 2, 00:18:33.225 "base_bdevs_list": [ 00:18:33.225 { 00:18:33.225 "name": "spare", 00:18:33.225 "uuid": "5c9af54c-34ef-503d-9f6c-358d9e14a5f7", 00:18:33.225 "is_configured": true, 00:18:33.225 "data_offset": 256, 00:18:33.225 "data_size": 7936 00:18:33.225 }, 00:18:33.225 { 00:18:33.225 "name": "BaseBdev2", 00:18:33.225 "uuid": "24783c3b-9852-5d9b-841c-17984bc8630b", 00:18:33.225 "is_configured": true, 00:18:33.225 "data_offset": 256, 00:18:33.225 "data_size": 7936 00:18:33.225 } 00:18:33.225 ] 00:18:33.225 }' 00:18:33.225 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.485 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:33.485 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.485 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:33.485 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:33.485 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:33.485 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.485 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:33.485 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:33.485 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.485 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.485 11:56:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.485 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.485 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.485 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.485 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.485 "name": "raid_bdev1", 00:18:33.485 "uuid": "6b463064-ad84-4232-bad4-869982bdc229", 00:18:33.485 "strip_size_kb": 0, 00:18:33.485 "state": "online", 00:18:33.485 "raid_level": "raid1", 00:18:33.485 "superblock": true, 00:18:33.485 "num_base_bdevs": 2, 00:18:33.485 "num_base_bdevs_discovered": 2, 00:18:33.485 "num_base_bdevs_operational": 2, 00:18:33.485 "base_bdevs_list": [ 00:18:33.485 { 00:18:33.485 "name": "spare", 00:18:33.485 "uuid": "5c9af54c-34ef-503d-9f6c-358d9e14a5f7", 00:18:33.485 "is_configured": true, 00:18:33.485 "data_offset": 256, 00:18:33.485 "data_size": 7936 00:18:33.485 }, 00:18:33.485 { 00:18:33.485 "name": "BaseBdev2", 00:18:33.485 "uuid": "24783c3b-9852-5d9b-841c-17984bc8630b", 00:18:33.485 "is_configured": true, 00:18:33.485 "data_offset": 256, 00:18:33.485 "data_size": 7936 00:18:33.485 } 00:18:33.485 ] 00:18:33.485 }' 00:18:33.485 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.485 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:33.485 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.485 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:33.485 11:56:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:33.485 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.485 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.485 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.486 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.486 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:33.486 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.486 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.486 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.486 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.486 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.486 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.486 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.486 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.486 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.486 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.486 "name": 
"raid_bdev1", 00:18:33.486 "uuid": "6b463064-ad84-4232-bad4-869982bdc229", 00:18:33.486 "strip_size_kb": 0, 00:18:33.486 "state": "online", 00:18:33.486 "raid_level": "raid1", 00:18:33.486 "superblock": true, 00:18:33.486 "num_base_bdevs": 2, 00:18:33.486 "num_base_bdevs_discovered": 2, 00:18:33.486 "num_base_bdevs_operational": 2, 00:18:33.486 "base_bdevs_list": [ 00:18:33.486 { 00:18:33.486 "name": "spare", 00:18:33.486 "uuid": "5c9af54c-34ef-503d-9f6c-358d9e14a5f7", 00:18:33.486 "is_configured": true, 00:18:33.486 "data_offset": 256, 00:18:33.486 "data_size": 7936 00:18:33.486 }, 00:18:33.486 { 00:18:33.486 "name": "BaseBdev2", 00:18:33.486 "uuid": "24783c3b-9852-5d9b-841c-17984bc8630b", 00:18:33.486 "is_configured": true, 00:18:33.486 "data_offset": 256, 00:18:33.486 "data_size": 7936 00:18:33.486 } 00:18:33.486 ] 00:18:33.486 }' 00:18:33.486 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.486 11:56:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.055 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:34.055 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.055 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.055 [2024-11-27 11:57:00.285925] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:34.055 [2024-11-27 11:57:00.286052] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:34.055 [2024-11-27 11:57:00.286216] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:34.055 [2024-11-27 11:57:00.286346] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:34.055 [2024-11-27 
11:57:00.286410] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:34.055 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.055 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:34.055 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.055 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.055 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.055 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.055 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:34.055 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:34.055 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:34.055 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:34.055 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.055 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.055 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.055 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:34.055 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.055 11:57:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.055 [2024-11-27 11:57:00.353794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:34.055 [2024-11-27 11:57:00.353946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.055 [2024-11-27 11:57:00.353980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:34.055 [2024-11-27 11:57:00.353992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.055 [2024-11-27 11:57:00.356337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.055 [2024-11-27 11:57:00.356382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:34.055 [2024-11-27 11:57:00.356460] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:34.055 [2024-11-27 11:57:00.356518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:34.055 [2024-11-27 11:57:00.356646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:34.055 spare 00:18:34.055 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.055 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:34.055 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.055 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.315 [2024-11-27 11:57:00.456563] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:34.315 [2024-11-27 11:57:00.456626] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:34.315 [2024-11-27 11:57:00.456785] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:34.315 [2024-11-27 11:57:00.456938] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:34.315 [2024-11-27 11:57:00.456951] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:34.315 [2024-11-27 11:57:00.457069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.315 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.315 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:34.315 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.315 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.315 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.315 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.315 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:34.315 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.315 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.315 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.315 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.315 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.315 11:57:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.315 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.315 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.315 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.315 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.315 "name": "raid_bdev1", 00:18:34.315 "uuid": "6b463064-ad84-4232-bad4-869982bdc229", 00:18:34.315 "strip_size_kb": 0, 00:18:34.315 "state": "online", 00:18:34.315 "raid_level": "raid1", 00:18:34.315 "superblock": true, 00:18:34.315 "num_base_bdevs": 2, 00:18:34.315 "num_base_bdevs_discovered": 2, 00:18:34.315 "num_base_bdevs_operational": 2, 00:18:34.315 "base_bdevs_list": [ 00:18:34.315 { 00:18:34.315 "name": "spare", 00:18:34.315 "uuid": "5c9af54c-34ef-503d-9f6c-358d9e14a5f7", 00:18:34.315 "is_configured": true, 00:18:34.315 "data_offset": 256, 00:18:34.315 "data_size": 7936 00:18:34.315 }, 00:18:34.315 { 00:18:34.315 "name": "BaseBdev2", 00:18:34.315 "uuid": "24783c3b-9852-5d9b-841c-17984bc8630b", 00:18:34.315 "is_configured": true, 00:18:34.315 "data_offset": 256, 00:18:34.315 "data_size": 7936 00:18:34.316 } 00:18:34.316 ] 00:18:34.316 }' 00:18:34.316 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.316 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.575 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:34.575 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.575 11:57:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:34.575 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:34.575 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.575 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.575 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.575 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.575 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.575 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.576 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.576 "name": "raid_bdev1", 00:18:34.576 "uuid": "6b463064-ad84-4232-bad4-869982bdc229", 00:18:34.576 "strip_size_kb": 0, 00:18:34.576 "state": "online", 00:18:34.576 "raid_level": "raid1", 00:18:34.576 "superblock": true, 00:18:34.576 "num_base_bdevs": 2, 00:18:34.576 "num_base_bdevs_discovered": 2, 00:18:34.576 "num_base_bdevs_operational": 2, 00:18:34.576 "base_bdevs_list": [ 00:18:34.576 { 00:18:34.576 "name": "spare", 00:18:34.576 "uuid": "5c9af54c-34ef-503d-9f6c-358d9e14a5f7", 00:18:34.576 "is_configured": true, 00:18:34.576 "data_offset": 256, 00:18:34.576 "data_size": 7936 00:18:34.576 }, 00:18:34.576 { 00:18:34.576 "name": "BaseBdev2", 00:18:34.576 "uuid": "24783c3b-9852-5d9b-841c-17984bc8630b", 00:18:34.576 "is_configured": true, 00:18:34.576 "data_offset": 256, 00:18:34.576 "data_size": 7936 00:18:34.576 } 00:18:34.576 ] 00:18:34.576 }' 00:18:34.576 11:57:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.835 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:34.835 11:57:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.836 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:34.836 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:34.836 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.836 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.836 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.836 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.836 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:34.836 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:34.836 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.836 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.836 [2024-11-27 11:57:01.084702] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:34.836 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.836 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:34.836 11:57:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.836 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.836 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.836 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.836 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:34.836 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.836 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.836 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.836 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.836 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.836 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.836 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.836 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.836 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.836 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.836 "name": "raid_bdev1", 00:18:34.836 "uuid": "6b463064-ad84-4232-bad4-869982bdc229", 00:18:34.836 "strip_size_kb": 0, 00:18:34.836 "state": "online", 00:18:34.836 
"raid_level": "raid1", 00:18:34.836 "superblock": true, 00:18:34.836 "num_base_bdevs": 2, 00:18:34.836 "num_base_bdevs_discovered": 1, 00:18:34.836 "num_base_bdevs_operational": 1, 00:18:34.836 "base_bdevs_list": [ 00:18:34.836 { 00:18:34.836 "name": null, 00:18:34.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.836 "is_configured": false, 00:18:34.836 "data_offset": 0, 00:18:34.836 "data_size": 7936 00:18:34.836 }, 00:18:34.836 { 00:18:34.836 "name": "BaseBdev2", 00:18:34.836 "uuid": "24783c3b-9852-5d9b-841c-17984bc8630b", 00:18:34.836 "is_configured": true, 00:18:34.836 "data_offset": 256, 00:18:34.836 "data_size": 7936 00:18:34.836 } 00:18:34.836 ] 00:18:34.836 }' 00:18:34.836 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.836 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.405 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:35.405 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.405 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.405 [2024-11-27 11:57:01.567931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:35.405 [2024-11-27 11:57:01.568263] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:35.405 [2024-11-27 11:57:01.568339] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:35.405 [2024-11-27 11:57:01.568430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:35.405 [2024-11-27 11:57:01.585307] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:35.405 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.405 11:57:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:35.405 [2024-11-27 11:57:01.587238] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:36.346 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:36.346 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.346 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:36.346 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:36.346 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.346 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.346 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.346 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.346 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.346 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.346 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:36.346 "name": "raid_bdev1", 00:18:36.346 "uuid": "6b463064-ad84-4232-bad4-869982bdc229", 00:18:36.346 "strip_size_kb": 0, 00:18:36.346 "state": "online", 00:18:36.346 "raid_level": "raid1", 00:18:36.346 "superblock": true, 00:18:36.346 "num_base_bdevs": 2, 00:18:36.346 "num_base_bdevs_discovered": 2, 00:18:36.346 "num_base_bdevs_operational": 2, 00:18:36.346 "process": { 00:18:36.346 "type": "rebuild", 00:18:36.346 "target": "spare", 00:18:36.346 "progress": { 00:18:36.346 "blocks": 2560, 00:18:36.346 "percent": 32 00:18:36.346 } 00:18:36.346 }, 00:18:36.346 "base_bdevs_list": [ 00:18:36.346 { 00:18:36.346 "name": "spare", 00:18:36.346 "uuid": "5c9af54c-34ef-503d-9f6c-358d9e14a5f7", 00:18:36.346 "is_configured": true, 00:18:36.346 "data_offset": 256, 00:18:36.346 "data_size": 7936 00:18:36.346 }, 00:18:36.346 { 00:18:36.346 "name": "BaseBdev2", 00:18:36.346 "uuid": "24783c3b-9852-5d9b-841c-17984bc8630b", 00:18:36.346 "is_configured": true, 00:18:36.346 "data_offset": 256, 00:18:36.346 "data_size": 7936 00:18:36.346 } 00:18:36.346 ] 00:18:36.346 }' 00:18:36.346 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.346 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:36.346 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.606 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:36.606 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:36.606 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.606 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.606 [2024-11-27 11:57:02.754690] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:36.606 [2024-11-27 11:57:02.793028] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:36.606 [2024-11-27 11:57:02.793144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.606 [2024-11-27 11:57:02.793161] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:36.606 [2024-11-27 11:57:02.793171] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:36.606 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.606 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:36.606 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:36.606 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:36.606 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.606 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.606 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:36.606 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.606 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.606 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.606 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.606 11:57:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.606 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.606 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.606 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.606 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.606 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.606 "name": "raid_bdev1", 00:18:36.606 "uuid": "6b463064-ad84-4232-bad4-869982bdc229", 00:18:36.606 "strip_size_kb": 0, 00:18:36.606 "state": "online", 00:18:36.606 "raid_level": "raid1", 00:18:36.606 "superblock": true, 00:18:36.606 "num_base_bdevs": 2, 00:18:36.606 "num_base_bdevs_discovered": 1, 00:18:36.606 "num_base_bdevs_operational": 1, 00:18:36.606 "base_bdevs_list": [ 00:18:36.606 { 00:18:36.606 "name": null, 00:18:36.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.606 "is_configured": false, 00:18:36.606 "data_offset": 0, 00:18:36.606 "data_size": 7936 00:18:36.606 }, 00:18:36.606 { 00:18:36.606 "name": "BaseBdev2", 00:18:36.606 "uuid": "24783c3b-9852-5d9b-841c-17984bc8630b", 00:18:36.606 "is_configured": true, 00:18:36.606 "data_offset": 256, 00:18:36.606 "data_size": 7936 00:18:36.606 } 00:18:36.606 ] 00:18:36.606 }' 00:18:36.606 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.606 11:57:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.867 11:57:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:36.867 11:57:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.867 11:57:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.867 [2024-11-27 11:57:03.224142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:36.867 [2024-11-27 11:57:03.224293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.867 [2024-11-27 11:57:03.224351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:36.867 [2024-11-27 11:57:03.224385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.867 [2024-11-27 11:57:03.224632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.867 [2024-11-27 11:57:03.224682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:36.867 [2024-11-27 11:57:03.224771] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:36.867 [2024-11-27 11:57:03.224810] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:36.867 [2024-11-27 11:57:03.224881] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:36.867 [2024-11-27 11:57:03.224927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:36.867 [2024-11-27 11:57:03.240829] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:36.867 spare 00:18:36.867 11:57:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.867 [2024-11-27 11:57:03.242883] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:36.867 11:57:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:38.248 "name": "raid_bdev1", 00:18:38.248 "uuid": "6b463064-ad84-4232-bad4-869982bdc229", 00:18:38.248 "strip_size_kb": 0, 00:18:38.248 "state": "online", 00:18:38.248 "raid_level": "raid1", 00:18:38.248 "superblock": true, 00:18:38.248 "num_base_bdevs": 2, 00:18:38.248 "num_base_bdevs_discovered": 2, 00:18:38.248 "num_base_bdevs_operational": 2, 00:18:38.248 "process": { 00:18:38.248 "type": "rebuild", 00:18:38.248 "target": "spare", 00:18:38.248 "progress": { 00:18:38.248 "blocks": 2560, 00:18:38.248 "percent": 32 00:18:38.248 } 00:18:38.248 }, 00:18:38.248 "base_bdevs_list": [ 00:18:38.248 { 00:18:38.248 "name": "spare", 00:18:38.248 "uuid": "5c9af54c-34ef-503d-9f6c-358d9e14a5f7", 00:18:38.248 "is_configured": true, 00:18:38.248 "data_offset": 256, 00:18:38.248 "data_size": 7936 00:18:38.248 }, 00:18:38.248 { 00:18:38.248 "name": "BaseBdev2", 00:18:38.248 "uuid": "24783c3b-9852-5d9b-841c-17984bc8630b", 00:18:38.248 "is_configured": true, 00:18:38.248 "data_offset": 256, 00:18:38.248 "data_size": 7936 00:18:38.248 } 00:18:38.248 ] 00:18:38.248 }' 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.248 [2024-11-27 
11:57:04.406855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:38.248 [2024-11-27 11:57:04.448943] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:38.248 [2024-11-27 11:57:04.449068] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:38.248 [2024-11-27 11:57:04.449089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:38.248 [2024-11-27 11:57:04.449096] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.248 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.249 11:57:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.249 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.249 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.249 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.249 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.249 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.249 "name": "raid_bdev1", 00:18:38.249 "uuid": "6b463064-ad84-4232-bad4-869982bdc229", 00:18:38.249 "strip_size_kb": 0, 00:18:38.249 "state": "online", 00:18:38.249 "raid_level": "raid1", 00:18:38.249 "superblock": true, 00:18:38.249 "num_base_bdevs": 2, 00:18:38.249 "num_base_bdevs_discovered": 1, 00:18:38.249 "num_base_bdevs_operational": 1, 00:18:38.249 "base_bdevs_list": [ 00:18:38.249 { 00:18:38.249 "name": null, 00:18:38.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.249 "is_configured": false, 00:18:38.249 "data_offset": 0, 00:18:38.249 "data_size": 7936 00:18:38.249 }, 00:18:38.249 { 00:18:38.249 "name": "BaseBdev2", 00:18:38.249 "uuid": "24783c3b-9852-5d9b-841c-17984bc8630b", 00:18:38.249 "is_configured": true, 00:18:38.249 "data_offset": 256, 00:18:38.249 "data_size": 7936 00:18:38.249 } 00:18:38.249 ] 00:18:38.249 }' 00:18:38.249 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.249 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.816 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:38.816 11:57:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.816 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:38.816 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:38.816 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.816 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.816 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.816 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.816 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.816 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.816 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.816 "name": "raid_bdev1", 00:18:38.816 "uuid": "6b463064-ad84-4232-bad4-869982bdc229", 00:18:38.816 "strip_size_kb": 0, 00:18:38.816 "state": "online", 00:18:38.816 "raid_level": "raid1", 00:18:38.816 "superblock": true, 00:18:38.816 "num_base_bdevs": 2, 00:18:38.816 "num_base_bdevs_discovered": 1, 00:18:38.816 "num_base_bdevs_operational": 1, 00:18:38.816 "base_bdevs_list": [ 00:18:38.816 { 00:18:38.816 "name": null, 00:18:38.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.816 "is_configured": false, 00:18:38.816 "data_offset": 0, 00:18:38.816 "data_size": 7936 00:18:38.816 }, 00:18:38.816 { 00:18:38.816 "name": "BaseBdev2", 00:18:38.816 "uuid": "24783c3b-9852-5d9b-841c-17984bc8630b", 00:18:38.816 "is_configured": true, 00:18:38.816 "data_offset": 256, 
00:18:38.816 "data_size": 7936 00:18:38.816 } 00:18:38.816 ] 00:18:38.816 }' 00:18:38.816 11:57:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.816 11:57:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:38.816 11:57:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.816 11:57:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:38.816 11:57:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:38.816 11:57:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.817 11:57:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.817 11:57:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.817 11:57:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:38.817 11:57:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.817 11:57:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.817 [2024-11-27 11:57:05.103221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:38.817 [2024-11-27 11:57:05.103338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.817 [2024-11-27 11:57:05.103381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:38.817 [2024-11-27 11:57:05.103390] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.817 [2024-11-27 11:57:05.103579] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.817 [2024-11-27 11:57:05.103594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:38.817 [2024-11-27 11:57:05.103645] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:38.817 [2024-11-27 11:57:05.103659] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:38.817 [2024-11-27 11:57:05.103669] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:38.817 [2024-11-27 11:57:05.103680] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:38.817 BaseBdev1 00:18:38.817 11:57:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.817 11:57:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:39.756 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:39.756 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.756 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.756 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.756 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.756 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:39.756 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.756 11:57:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.756 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.756 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.756 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.756 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.756 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.756 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.756 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.015 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.015 "name": "raid_bdev1", 00:18:40.015 "uuid": "6b463064-ad84-4232-bad4-869982bdc229", 00:18:40.015 "strip_size_kb": 0, 00:18:40.015 "state": "online", 00:18:40.015 "raid_level": "raid1", 00:18:40.015 "superblock": true, 00:18:40.015 "num_base_bdevs": 2, 00:18:40.015 "num_base_bdevs_discovered": 1, 00:18:40.015 "num_base_bdevs_operational": 1, 00:18:40.015 "base_bdevs_list": [ 00:18:40.015 { 00:18:40.015 "name": null, 00:18:40.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.015 "is_configured": false, 00:18:40.015 "data_offset": 0, 00:18:40.015 "data_size": 7936 00:18:40.015 }, 00:18:40.015 { 00:18:40.015 "name": "BaseBdev2", 00:18:40.015 "uuid": "24783c3b-9852-5d9b-841c-17984bc8630b", 00:18:40.015 "is_configured": true, 00:18:40.015 "data_offset": 256, 00:18:40.015 "data_size": 7936 00:18:40.015 } 00:18:40.015 ] 00:18:40.015 }' 00:18:40.015 11:57:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.015 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.275 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:40.275 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.275 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:40.275 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:40.275 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.275 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.275 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.275 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.275 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.275 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.275 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.275 "name": "raid_bdev1", 00:18:40.275 "uuid": "6b463064-ad84-4232-bad4-869982bdc229", 00:18:40.275 "strip_size_kb": 0, 00:18:40.275 "state": "online", 00:18:40.275 "raid_level": "raid1", 00:18:40.275 "superblock": true, 00:18:40.275 "num_base_bdevs": 2, 00:18:40.275 "num_base_bdevs_discovered": 1, 00:18:40.275 "num_base_bdevs_operational": 1, 00:18:40.275 "base_bdevs_list": [ 00:18:40.275 { 00:18:40.275 "name": 
null, 00:18:40.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.275 "is_configured": false, 00:18:40.275 "data_offset": 0, 00:18:40.276 "data_size": 7936 00:18:40.276 }, 00:18:40.276 { 00:18:40.276 "name": "BaseBdev2", 00:18:40.276 "uuid": "24783c3b-9852-5d9b-841c-17984bc8630b", 00:18:40.276 "is_configured": true, 00:18:40.276 "data_offset": 256, 00:18:40.276 "data_size": 7936 00:18:40.276 } 00:18:40.276 ] 00:18:40.276 }' 00:18:40.276 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.536 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:40.536 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:40.536 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:40.536 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:40.536 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:40.536 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:40.536 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:40.536 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.536 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:40.536 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.537 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:40.537 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.537 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.537 [2024-11-27 11:57:06.732681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:40.537 [2024-11-27 11:57:06.732877] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:40.537 [2024-11-27 11:57:06.732897] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:40.537 request: 00:18:40.537 { 00:18:40.537 "base_bdev": "BaseBdev1", 00:18:40.537 "raid_bdev": "raid_bdev1", 00:18:40.537 "method": "bdev_raid_add_base_bdev", 00:18:40.537 "req_id": 1 00:18:40.537 } 00:18:40.537 Got JSON-RPC error response 00:18:40.537 response: 00:18:40.537 { 00:18:40.537 "code": -22, 00:18:40.537 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:40.537 } 00:18:40.537 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:40.537 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:40.537 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:40.537 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:40.537 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:40.537 11:57:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:41.475 11:57:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:18:41.475 11:57:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.475 11:57:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.475 11:57:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.475 11:57:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.475 11:57:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:41.475 11:57:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.475 11:57:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.475 11:57:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.475 11:57:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.475 11:57:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.475 11:57:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.475 11:57:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.475 11:57:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.475 11:57:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.475 11:57:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.475 "name": "raid_bdev1", 00:18:41.475 "uuid": "6b463064-ad84-4232-bad4-869982bdc229", 00:18:41.475 "strip_size_kb": 0, 
00:18:41.475 "state": "online", 00:18:41.475 "raid_level": "raid1", 00:18:41.475 "superblock": true, 00:18:41.475 "num_base_bdevs": 2, 00:18:41.475 "num_base_bdevs_discovered": 1, 00:18:41.475 "num_base_bdevs_operational": 1, 00:18:41.475 "base_bdevs_list": [ 00:18:41.475 { 00:18:41.475 "name": null, 00:18:41.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.475 "is_configured": false, 00:18:41.475 "data_offset": 0, 00:18:41.475 "data_size": 7936 00:18:41.475 }, 00:18:41.475 { 00:18:41.475 "name": "BaseBdev2", 00:18:41.475 "uuid": "24783c3b-9852-5d9b-841c-17984bc8630b", 00:18:41.475 "is_configured": true, 00:18:41.475 "data_offset": 256, 00:18:41.475 "data_size": 7936 00:18:41.475 } 00:18:41.476 ] 00:18:41.476 }' 00:18:41.476 11:57:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.476 11:57:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.044 11:57:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:42.044 11:57:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.044 11:57:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:42.044 11:57:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:42.044 11:57:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.044 11:57:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.044 11:57:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.044 11:57:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.044 
11:57:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.044 11:57:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.044 11:57:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.044 "name": "raid_bdev1", 00:18:42.044 "uuid": "6b463064-ad84-4232-bad4-869982bdc229", 00:18:42.044 "strip_size_kb": 0, 00:18:42.044 "state": "online", 00:18:42.044 "raid_level": "raid1", 00:18:42.044 "superblock": true, 00:18:42.044 "num_base_bdevs": 2, 00:18:42.044 "num_base_bdevs_discovered": 1, 00:18:42.044 "num_base_bdevs_operational": 1, 00:18:42.044 "base_bdevs_list": [ 00:18:42.044 { 00:18:42.044 "name": null, 00:18:42.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.044 "is_configured": false, 00:18:42.044 "data_offset": 0, 00:18:42.044 "data_size": 7936 00:18:42.044 }, 00:18:42.044 { 00:18:42.044 "name": "BaseBdev2", 00:18:42.044 "uuid": "24783c3b-9852-5d9b-841c-17984bc8630b", 00:18:42.044 "is_configured": true, 00:18:42.044 "data_offset": 256, 00:18:42.044 "data_size": 7936 00:18:42.044 } 00:18:42.044 ] 00:18:42.044 }' 00:18:42.044 11:57:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.044 11:57:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:42.044 11:57:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.044 11:57:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:42.044 11:57:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89107 00:18:42.044 11:57:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89107 ']' 00:18:42.044 11:57:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89107 00:18:42.044 11:57:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:42.044 11:57:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.044 11:57:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89107 00:18:42.044 11:57:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:42.044 11:57:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:42.044 killing process with pid 89107 00:18:42.044 Received shutdown signal, test time was about 60.000000 seconds 00:18:42.044 00:18:42.044 Latency(us) 00:18:42.044 [2024-11-27T11:57:08.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.044 [2024-11-27T11:57:08.429Z] =================================================================================================================== 00:18:42.044 [2024-11-27T11:57:08.429Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:42.044 11:57:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89107' 00:18:42.044 11:57:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89107 00:18:42.044 [2024-11-27 11:57:08.406711] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:42.044 [2024-11-27 11:57:08.406854] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:42.044 11:57:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89107 00:18:42.044 [2024-11-27 11:57:08.406905] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:18:42.044 [2024-11-27 11:57:08.406916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:42.614 [2024-11-27 11:57:08.706370] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:43.575 11:57:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:18:43.575 00:18:43.575 real 0m17.898s 00:18:43.575 user 0m23.721s 00:18:43.575 sys 0m1.697s 00:18:43.575 11:57:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.575 ************************************ 00:18:43.575 END TEST raid_rebuild_test_sb_md_interleaved 00:18:43.575 ************************************ 00:18:43.575 11:57:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.575 11:57:09 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:18:43.575 11:57:09 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:18:43.575 11:57:09 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89107 ']' 00:18:43.575 11:57:09 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89107 00:18:43.575 11:57:09 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:18:43.575 ************************************ 00:18:43.575 END TEST bdev_raid 00:18:43.575 ************************************ 00:18:43.575 00:18:43.575 real 12m14.803s 00:18:43.576 user 16m38.484s 00:18:43.576 sys 1m52.420s 00:18:43.576 11:57:09 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.576 11:57:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:43.576 11:57:09 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:43.576 11:57:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:43.576 11:57:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.576 11:57:09 -- common/autotest_common.sh@10 -- # set +x 00:18:43.858 
************************************ 00:18:43.858 START TEST spdkcli_raid 00:18:43.858 ************************************ 00:18:43.858 11:57:09 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:43.858 * Looking for test storage... 00:18:43.858 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:43.858 11:57:10 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:43.858 11:57:10 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:18:43.858 11:57:10 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:43.858 11:57:10 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:43.858 11:57:10 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:43.858 11:57:10 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:43.858 11:57:10 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:43.858 11:57:10 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:43.858 11:57:10 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:43.858 11:57:10 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:43.858 11:57:10 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:43.858 11:57:10 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:43.858 11:57:10 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:43.858 11:57:10 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:43.858 11:57:10 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:43.858 11:57:10 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:43.858 11:57:10 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:18:43.858 11:57:10 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:43.858 11:57:10 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:43.858 11:57:10 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:18:43.858 11:57:10 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:18:43.858 11:57:10 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:43.858 11:57:10 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:18:43.858 11:57:10 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:43.858 11:57:10 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:18:43.858 11:57:10 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:18:43.858 11:57:10 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:43.858 11:57:10 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:18:43.858 11:57:10 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:43.858 11:57:10 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:43.858 11:57:10 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:43.858 11:57:10 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:18:43.858 11:57:10 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:43.858 11:57:10 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:43.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.858 --rc genhtml_branch_coverage=1 00:18:43.858 --rc genhtml_function_coverage=1 00:18:43.858 --rc genhtml_legend=1 00:18:43.858 --rc geninfo_all_blocks=1 00:18:43.858 --rc geninfo_unexecuted_blocks=1 00:18:43.858 00:18:43.858 ' 00:18:43.858 11:57:10 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:43.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.858 --rc genhtml_branch_coverage=1 00:18:43.858 --rc genhtml_function_coverage=1 00:18:43.858 --rc genhtml_legend=1 00:18:43.858 --rc geninfo_all_blocks=1 00:18:43.858 --rc geninfo_unexecuted_blocks=1 00:18:43.858 00:18:43.858 ' 00:18:43.858 
11:57:10 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:43.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.858 --rc genhtml_branch_coverage=1 00:18:43.858 --rc genhtml_function_coverage=1 00:18:43.858 --rc genhtml_legend=1 00:18:43.858 --rc geninfo_all_blocks=1 00:18:43.858 --rc geninfo_unexecuted_blocks=1 00:18:43.858 00:18:43.858 ' 00:18:43.858 11:57:10 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:43.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.858 --rc genhtml_branch_coverage=1 00:18:43.858 --rc genhtml_function_coverage=1 00:18:43.858 --rc genhtml_legend=1 00:18:43.858 --rc geninfo_all_blocks=1 00:18:43.858 --rc geninfo_unexecuted_blocks=1 00:18:43.858 00:18:43.858 ' 00:18:43.858 11:57:10 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:43.858 11:57:10 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:43.858 11:57:10 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:43.858 11:57:10 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:43.858 11:57:10 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:43.858 11:57:10 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:43.858 11:57:10 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:43.858 11:57:10 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:43.858 11:57:10 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:43.858 11:57:10 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:43.858 11:57:10 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:18:43.858 11:57:10 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:43.858 11:57:10 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:43.858 11:57:10 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:43.858 11:57:10 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:43.858 11:57:10 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:43.858 11:57:10 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:43.858 11:57:10 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:43.858 11:57:10 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:43.858 11:57:10 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:43.858 11:57:10 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:43.858 11:57:10 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:43.858 11:57:10 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:43.858 11:57:10 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:18:43.858 11:57:10 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:18:43.858 11:57:10 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:43.858 11:57:10 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:43.858 11:57:10 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:43.858 11:57:10 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:43.858 11:57:10 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:43.858 11:57:10 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:43.858 11:57:10 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:18:43.859 11:57:10 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:18:43.859 11:57:10 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:43.859 11:57:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:43.859 11:57:10 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:18:43.859 11:57:10 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89789 00:18:43.859 11:57:10 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:43.859 11:57:10 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89789 00:18:43.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.859 11:57:10 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89789 ']' 00:18:43.859 11:57:10 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.859 11:57:10 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.859 11:57:10 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.859 11:57:10 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.859 11:57:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:44.127 [2024-11-27 11:57:10.318927] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:18:44.127 [2024-11-27 11:57:10.319042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89789 ] 00:18:44.127 [2024-11-27 11:57:10.494003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:44.386 [2024-11-27 11:57:10.612942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.386 [2024-11-27 11:57:10.612985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.325 11:57:11 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.325 11:57:11 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:18:45.325 11:57:11 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:18:45.325 11:57:11 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:45.325 11:57:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:45.325 11:57:11 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:18:45.325 11:57:11 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.325 11:57:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:45.325 11:57:11 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:18:45.325 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:18:45.325 ' 00:18:46.704 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:18:46.704 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:18:46.963 11:57:13 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:18:46.963 11:57:13 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:46.963 11:57:13 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:18:46.963 11:57:13 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:18:46.963 11:57:13 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:46.963 11:57:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:46.963 11:57:13 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:18:46.963 ' 00:18:47.901 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:18:48.160 11:57:14 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:18:48.160 11:57:14 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:48.160 11:57:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.160 11:57:14 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:18:48.160 11:57:14 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:48.160 11:57:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.160 11:57:14 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:18:48.160 11:57:14 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:18:48.728 11:57:14 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:18:48.728 11:57:14 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:18:48.728 11:57:14 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:18:48.728 11:57:14 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:48.728 11:57:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.728 11:57:15 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:18:48.728 11:57:15 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:48.728 11:57:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.728 11:57:15 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:18:48.728 ' 00:18:49.666 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:18:49.926 11:57:16 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:18:49.926 11:57:16 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:49.926 11:57:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:49.926 11:57:16 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:18:49.926 11:57:16 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:49.926 11:57:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:49.926 11:57:16 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:18:49.926 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:18:49.926 ' 00:18:51.306 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:18:51.306 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:18:51.306 11:57:17 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:18:51.306 11:57:17 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:51.306 11:57:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:51.565 11:57:17 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89789 00:18:51.565 11:57:17 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89789 ']' 00:18:51.565 11:57:17 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89789 00:18:51.565 11:57:17 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:18:51.565 11:57:17 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:51.565 11:57:17 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89789 00:18:51.565 killing process with pid 89789 00:18:51.565 11:57:17 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:51.565 11:57:17 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:51.565 11:57:17 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89789' 00:18:51.565 11:57:17 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89789 00:18:51.565 11:57:17 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89789 00:18:54.103 11:57:20 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:18:54.103 11:57:20 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89789 ']' 00:18:54.103 11:57:20 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89789 00:18:54.103 11:57:20 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89789 ']' 00:18:54.103 11:57:20 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89789 00:18:54.103 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89789) - No such process 00:18:54.103 Process with pid 89789 is not found 00:18:54.103 11:57:20 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89789 is not found' 00:18:54.103 11:57:20 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:18:54.103 11:57:20 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:18:54.103 11:57:20 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:18:54.103 11:57:20 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:18:54.103 00:18:54.103 real 0m10.212s 00:18:54.103 user 0m21.043s 00:18:54.103 sys 
0m1.146s 00:18:54.103 11:57:20 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.103 11:57:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:54.103 ************************************ 00:18:54.103 END TEST spdkcli_raid 00:18:54.103 ************************************ 00:18:54.103 11:57:20 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:54.103 11:57:20 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:54.103 11:57:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:54.103 11:57:20 -- common/autotest_common.sh@10 -- # set +x 00:18:54.103 ************************************ 00:18:54.103 START TEST blockdev_raid5f 00:18:54.103 ************************************ 00:18:54.103 11:57:20 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:54.103 * Looking for test storage... 00:18:54.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:54.103 11:57:20 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:54.103 11:57:20 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:18:54.103 11:57:20 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:54.103 11:57:20 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:54.103 11:57:20 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:54.103 11:57:20 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:54.103 11:57:20 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:54.103 11:57:20 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:18:54.103 11:57:20 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:18:54.103 11:57:20 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:18:54.103 11:57:20 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:18:54.103 11:57:20 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:18:54.103 11:57:20 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:18:54.103 11:57:20 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:18:54.103 11:57:20 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:54.103 11:57:20 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:18:54.103 11:57:20 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:18:54.103 11:57:20 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:54.103 11:57:20 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:54.103 11:57:20 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:18:54.103 11:57:20 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:18:54.103 11:57:20 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:54.103 11:57:20 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:18:54.103 11:57:20 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:18:54.103 11:57:20 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:18:54.103 11:57:20 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:18:54.103 11:57:20 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:54.103 11:57:20 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:18:54.103 11:57:20 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:18:54.103 11:57:20 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:54.103 11:57:20 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:54.103 11:57:20 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:18:54.104 11:57:20 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:54.104 11:57:20 blockdev_raid5f -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:54.104 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.104 --rc genhtml_branch_coverage=1 00:18:54.104 --rc genhtml_function_coverage=1 00:18:54.104 --rc genhtml_legend=1 00:18:54.104 --rc geninfo_all_blocks=1 00:18:54.104 --rc geninfo_unexecuted_blocks=1 00:18:54.104 00:18:54.104 ' 00:18:54.104 11:57:20 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:54.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.104 --rc genhtml_branch_coverage=1 00:18:54.104 --rc genhtml_function_coverage=1 00:18:54.104 --rc genhtml_legend=1 00:18:54.104 --rc geninfo_all_blocks=1 00:18:54.104 --rc geninfo_unexecuted_blocks=1 00:18:54.104 00:18:54.104 ' 00:18:54.104 11:57:20 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:54.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.104 --rc genhtml_branch_coverage=1 00:18:54.104 --rc genhtml_function_coverage=1 00:18:54.104 --rc genhtml_legend=1 00:18:54.104 --rc geninfo_all_blocks=1 00:18:54.104 --rc geninfo_unexecuted_blocks=1 00:18:54.104 00:18:54.104 ' 00:18:54.104 11:57:20 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:54.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.104 --rc genhtml_branch_coverage=1 00:18:54.104 --rc genhtml_function_coverage=1 00:18:54.104 --rc genhtml_legend=1 00:18:54.104 --rc geninfo_all_blocks=1 00:18:54.104 --rc geninfo_unexecuted_blocks=1 00:18:54.104 00:18:54.104 ' 00:18:54.104 11:57:20 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:54.104 11:57:20 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:18:54.104 11:57:20 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:54.104 11:57:20 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:54.104 11:57:20 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:54.104 11:57:20 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:54.104 11:57:20 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:54.104 11:57:20 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:54.104 11:57:20 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:18:54.104 11:57:20 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:18:54.104 11:57:20 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:18:54.104 11:57:20 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:18:54.104 11:57:20 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:18:54.104 11:57:20 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:18:54.104 11:57:20 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:18:54.104 11:57:20 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:18:54.104 11:57:20 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:18:54.104 11:57:20 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:18:54.104 11:57:20 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:18:54.104 11:57:20 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:18:54.104 11:57:20 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:18:54.104 11:57:20 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:18:54.104 11:57:20 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:18:54.104 11:57:20 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:18:54.104 11:57:20 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90069 00:18:54.104 11:57:20 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:54.104 11:57:20 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:54.104 11:57:20 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90069 00:18:54.104 11:57:20 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90069 ']' 00:18:54.104 11:57:20 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.363 11:57:20 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:54.363 11:57:20 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.363 11:57:20 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:54.363 11:57:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:54.363 [2024-11-27 11:57:20.581645] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:18:54.363 [2024-11-27 11:57:20.582303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90069 ] 00:18:54.622 [2024-11-27 11:57:20.752939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.622 [2024-11-27 11:57:20.864796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.560 11:57:21 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:55.560 11:57:21 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:18:55.560 11:57:21 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:18:55.560 11:57:21 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:18:55.560 11:57:21 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:18:55.560 11:57:21 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.560 11:57:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:55.560 Malloc0 00:18:55.560 Malloc1 00:18:55.560 Malloc2 00:18:55.560 11:57:21 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.560 11:57:21 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:18:55.560 11:57:21 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.560 11:57:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:55.560 11:57:21 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.560 11:57:21 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:18:55.560 11:57:21 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:18:55.560 11:57:21 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.560 11:57:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:55.560 11:57:21 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.560 11:57:21 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:18:55.560 11:57:21 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.560 11:57:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:55.560 11:57:21 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.560 11:57:21 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:55.560 11:57:21 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.560 11:57:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:55.560 11:57:21 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.560 11:57:21 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:18:55.560 11:57:21 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 
00:18:55.560 11:57:21 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.560 11:57:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:55.560 11:57:21 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:18:55.817 11:57:21 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.817 11:57:21 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:18:55.817 11:57:21 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:18:55.817 11:57:21 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "73bf4d53-0f56-427d-82b5-53764ae91a34"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "73bf4d53-0f56-427d-82b5-53764ae91a34",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "73bf4d53-0f56-427d-82b5-53764ae91a34",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "97bc1bd2-0bac-4d98-a4d9-5a03c0df5699",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"25e9d3e3-2209-4890-90b3-a68f53ba0ce6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "c96b488c-ecce-4db5-bb7e-267b848cb3b3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:55.817 11:57:21 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:18:55.817 11:57:21 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:18:55.817 11:57:21 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:18:55.817 11:57:21 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 90069 00:18:55.817 11:57:21 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90069 ']' 00:18:55.817 11:57:21 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90069 00:18:55.817 11:57:21 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:18:55.818 11:57:21 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.818 11:57:21 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90069 00:18:55.818 killing process with pid 90069 00:18:55.818 11:57:22 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:55.818 11:57:22 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:55.818 11:57:22 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90069' 00:18:55.818 11:57:22 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90069 00:18:55.818 11:57:22 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90069 00:18:58.351 11:57:24 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:58.351 11:57:24 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:58.351 11:57:24 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:58.351 11:57:24 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.351 11:57:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:58.351 ************************************ 00:18:58.351 START TEST bdev_hello_world 00:18:58.351 ************************************ 00:18:58.351 11:57:24 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:58.609 [2024-11-27 11:57:24.794387] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:18:58.609 [2024-11-27 11:57:24.794504] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90136 ] 00:18:58.609 [2024-11-27 11:57:24.961901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.866 [2024-11-27 11:57:25.073307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.436 [2024-11-27 11:57:25.581888] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:59.436 [2024-11-27 11:57:25.581939] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:18:59.436 [2024-11-27 11:57:25.581954] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:59.436 [2024-11-27 11:57:25.582394] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:59.436 [2024-11-27 11:57:25.582519] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:59.436 [2024-11-27 11:57:25.582537] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:59.436 [2024-11-27 11:57:25.582581] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:18:59.436 00:18:59.436 [2024-11-27 11:57:25.582599] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:00.822 00:19:00.822 real 0m2.272s 00:19:00.822 user 0m1.911s 00:19:00.822 sys 0m0.239s 00:19:00.822 11:57:26 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:00.822 ************************************ 00:19:00.822 END TEST bdev_hello_world 00:19:00.822 ************************************ 00:19:00.822 11:57:26 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:00.822 11:57:27 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:19:00.822 11:57:27 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:00.822 11:57:27 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:00.822 11:57:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:00.822 ************************************ 00:19:00.822 START TEST bdev_bounds 00:19:00.822 ************************************ 00:19:00.822 11:57:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:00.822 11:57:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90178 00:19:00.822 11:57:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:00.822 11:57:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:00.822 Process bdevio pid: 90178 00:19:00.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:00.822 11:57:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90178' 00:19:00.822 11:57:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90178 00:19:00.822 11:57:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90178 ']' 00:19:00.822 11:57:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.822 11:57:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:00.822 11:57:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.822 11:57:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:00.822 11:57:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:00.822 [2024-11-27 11:57:27.125194] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:19:00.822 [2024-11-27 11:57:27.125313] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90178 ] 00:19:01.082 [2024-11-27 11:57:27.298320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:01.082 [2024-11-27 11:57:27.411881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.082 [2024-11-27 11:57:27.412064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.082 [2024-11-27 11:57:27.412130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.650 11:57:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.650 11:57:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:01.650 11:57:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:01.650 I/O targets: 00:19:01.650 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:01.650 00:19:01.650 00:19:01.650 CUnit - A unit testing framework for C - Version 2.1-3 00:19:01.650 http://cunit.sourceforge.net/ 00:19:01.650 00:19:01.650 00:19:01.650 Suite: bdevio tests on: raid5f 00:19:01.650 Test: blockdev write read block ...passed 00:19:01.650 Test: blockdev write zeroes read block ...passed 00:19:01.909 Test: blockdev write zeroes read no split ...passed 00:19:01.909 Test: blockdev write zeroes read split ...passed 00:19:01.909 Test: blockdev write zeroes read split partial ...passed 00:19:01.909 Test: blockdev reset ...passed 00:19:01.909 Test: blockdev write read 8 blocks ...passed 00:19:01.909 Test: blockdev write read size > 128k ...passed 00:19:01.909 Test: blockdev write read invalid size ...passed 00:19:01.909 Test: blockdev write read offset + nbytes == size of blockdev ...passed 
00:19:01.909 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:01.909 Test: blockdev write read max offset ...passed 00:19:01.909 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:01.909 Test: blockdev writev readv 8 blocks ...passed 00:19:01.909 Test: blockdev writev readv 30 x 1block ...passed 00:19:01.909 Test: blockdev writev readv block ...passed 00:19:02.168 Test: blockdev writev readv size > 128k ...passed 00:19:02.168 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:02.168 Test: blockdev comparev and writev ...passed 00:19:02.168 Test: blockdev nvme passthru rw ...passed 00:19:02.168 Test: blockdev nvme passthru vendor specific ...passed 00:19:02.168 Test: blockdev nvme admin passthru ...passed 00:19:02.168 Test: blockdev copy ...passed 00:19:02.168 00:19:02.168 Run Summary: Type Total Ran Passed Failed Inactive 00:19:02.168 suites 1 1 n/a 0 0 00:19:02.168 tests 23 23 23 0 0 00:19:02.168 asserts 130 130 130 0 n/a 00:19:02.168 00:19:02.168 Elapsed time = 0.657 seconds 00:19:02.168 0 00:19:02.168 11:57:28 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90178 00:19:02.168 11:57:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90178 ']' 00:19:02.168 11:57:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90178 00:19:02.168 11:57:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:02.168 11:57:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:02.168 11:57:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90178 00:19:02.168 11:57:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:02.168 11:57:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:02.168 11:57:28 blockdev_raid5f.bdev_bounds -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 90178' 00:19:02.168 killing process with pid 90178 00:19:02.168 11:57:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90178 00:19:02.168 11:57:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90178 00:19:03.547 11:57:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:03.547 00:19:03.547 real 0m2.757s 00:19:03.547 user 0m6.850s 00:19:03.547 sys 0m0.357s 00:19:03.547 11:57:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:03.547 ************************************ 00:19:03.547 END TEST bdev_bounds 00:19:03.547 ************************************ 00:19:03.547 11:57:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:03.548 11:57:29 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:03.548 11:57:29 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:03.548 11:57:29 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:03.548 11:57:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:03.548 ************************************ 00:19:03.548 START TEST bdev_nbd 00:19:03.548 ************************************ 00:19:03.548 11:57:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:03.548 11:57:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:03.548 11:57:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:03.548 11:57:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:03.548 11:57:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local 
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:03.548 11:57:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:03.548 11:57:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:03.548 11:57:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:03.548 11:57:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:03.548 11:57:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:03.548 11:57:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:03.548 11:57:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:03.548 11:57:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:03.548 11:57:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:03.548 11:57:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:03.548 11:57:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:19:03.548 11:57:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90238 00:19:03.548 11:57:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:03.548 11:57:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:03.548 11:57:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90238 /var/tmp/spdk-nbd.sock 00:19:03.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:19:03.548 11:57:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90238 ']' 00:19:03.548 11:57:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:03.548 11:57:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:03.548 11:57:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:03.548 11:57:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:03.548 11:57:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:03.809 [2024-11-27 11:57:29.959226] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:19:03.809 [2024-11-27 11:57:29.959433] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.809 [2024-11-27 11:57:30.118369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.069 [2024-11-27 11:57:30.226818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.638 11:57:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.638 11:57:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:04.638 11:57:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:04.638 11:57:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:04.638 11:57:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:04.638 11:57:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:04.638 11:57:30 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:04.638 11:57:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:04.638 11:57:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:04.638 11:57:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:04.638 11:57:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:04.638 11:57:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:04.638 11:57:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:04.638 11:57:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:04.638 11:57:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:04.638 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:04.638 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:04.898 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:04.898 11:57:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:04.898 11:57:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:04.898 11:57:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:04.898 11:57:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:04.898 11:57:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:04.898 11:57:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:04.898 11:57:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:04.898 11:57:31 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:04.898 11:57:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:04.898 1+0 records in 00:19:04.898 1+0 records out 00:19:04.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490976 s, 8.3 MB/s 00:19:04.898 11:57:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:04.898 11:57:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:04.898 11:57:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:04.898 11:57:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:04.898 11:57:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:04.898 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:04.898 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:04.898 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:04.898 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:04.898 { 00:19:04.898 "nbd_device": "/dev/nbd0", 00:19:04.898 "bdev_name": "raid5f" 00:19:04.898 } 00:19:04.898 ]' 00:19:04.898 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:04.898 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:04.898 { 00:19:04.898 "nbd_device": "/dev/nbd0", 00:19:04.898 "bdev_name": "raid5f" 00:19:04.898 } 00:19:04.898 ]' 00:19:04.898 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:05.158 11:57:31 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:05.158 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.158 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:05.158 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:05.158 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:05.158 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:05.158 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:05.158 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:05.158 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:05.158 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:05.158 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:05.158 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:05.417 
11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:05.417 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:05.676 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:05.676 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:05.676 11:57:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:05.676 /dev/nbd0 00:19:05.676 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:05.676 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:05.676 11:57:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:05.676 11:57:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:05.676 11:57:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:05.676 11:57:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:05.676 11:57:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:05.676 11:57:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:05.676 11:57:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:05.676 11:57:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:05.677 11:57:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:05.677 1+0 records in 00:19:05.677 1+0 records out 00:19:05.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353052 s, 11.6 MB/s 
00:19:05.677 11:57:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:05.677 11:57:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:05.677 11:57:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:05.677 11:57:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:05.677 11:57:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:05.677 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:05.677 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:05.677 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:05.677 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.677 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:05.936 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:05.937 { 00:19:05.937 "nbd_device": "/dev/nbd0", 00:19:05.937 "bdev_name": "raid5f" 00:19:05.937 } 00:19:05.937 ]' 00:19:05.937 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:05.937 { 00:19:05.937 "nbd_device": "/dev/nbd0", 00:19:05.937 "bdev_name": "raid5f" 00:19:05.937 } 00:19:05.937 ]' 00:19:05.937 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:05.937 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:05.937 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:05.937 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:05.937 11:57:32 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:05.937 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:05.937 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:05.937 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:05.937 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:05.937 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:06.197 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:06.197 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:06.197 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:06.197 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:06.197 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:06.197 256+0 records in 00:19:06.197 256+0 records out 00:19:06.197 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129772 s, 80.8 MB/s 00:19:06.197 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:06.197 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:06.197 256+0 records in 00:19:06.197 256+0 records out 00:19:06.197 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.034033 s, 30.8 MB/s 00:19:06.197 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:06.197 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:06.197 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 
00:19:06.197 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:06.197 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:06.197 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:06.197 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:06.197 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:06.197 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:06.197 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:06.197 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:06.197 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:06.197 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:06.197 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:06.197 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:06.197 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:06.197 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:06.457 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:06.457 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:06.457 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:06.457 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 
)) 00:19:06.457 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:06.457 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:06.457 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:06.457 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:06.457 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:06.457 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:06.457 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:06.717 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:06.717 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:06.717 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:06.717 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:06.717 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:06.717 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:06.717 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:06.717 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:06.717 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:06.717 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:06.717 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:06.717 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:06.717 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:06.717 11:57:32 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:06.717 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:06.717 11:57:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:06.977 malloc_lvol_verify 00:19:06.977 11:57:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:06.977 db1ed9d0-89eb-4567-8d7b-e01c6d39fea6 00:19:06.977 11:57:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:07.237 5e979369-3eb2-47d2-b505-7e56a2364b79 00:19:07.237 11:57:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:07.497 /dev/nbd0 00:19:07.497 11:57:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:07.497 11:57:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:07.497 11:57:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:07.497 11:57:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:07.497 11:57:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:07.497 mke2fs 1.47.0 (5-Feb-2023) 00:19:07.497 Discarding device blocks: 0/4096 done 00:19:07.497 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:07.497 00:19:07.497 Allocating group tables: 0/1 done 00:19:07.497 Writing inode tables: 0/1 done 00:19:07.497 Creating journal (1024 blocks): done 00:19:07.497 Writing superblocks and filesystem accounting information: 0/1 
done 00:19:07.497 00:19:07.497 11:57:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:07.497 11:57:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:07.497 11:57:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:07.497 11:57:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:07.497 11:57:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:07.497 11:57:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:07.497 11:57:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:07.757 11:57:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:07.757 11:57:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:07.757 11:57:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:07.757 11:57:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:07.757 11:57:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:07.757 11:57:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:07.757 11:57:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:07.757 11:57:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:07.757 11:57:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90238 00:19:07.757 11:57:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90238 ']' 00:19:07.757 11:57:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90238 00:19:07.757 11:57:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:19:07.757 11:57:33 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.757 11:57:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90238 00:19:07.757 killing process with pid 90238 00:19:07.757 11:57:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:07.757 11:57:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:07.757 11:57:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90238' 00:19:07.757 11:57:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90238 00:19:07.757 11:57:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@978 -- # wait 90238 00:19:09.138 11:57:35 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:09.138 00:19:09.138 real 0m5.649s 00:19:09.138 user 0m7.630s 00:19:09.138 sys 0m1.315s 00:19:09.138 11:57:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:09.138 ************************************ 00:19:09.138 END TEST bdev_nbd 00:19:09.138 ************************************ 00:19:09.138 11:57:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:09.398 11:57:35 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:19:09.398 11:57:35 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:19:09.398 11:57:35 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:19:09.398 11:57:35 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:19:09.398 11:57:35 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:09.398 11:57:35 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:09.398 11:57:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:09.398 ************************************ 00:19:09.398 START TEST bdev_fio 00:19:09.398 
************************************ 00:19:09.398 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:09.398 11:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:19:09.398 11:57:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:09.398 11:57:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:09.398 11:57:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:09.398 11:57:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:09.398 11:57:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:09.398 11:57:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:09.398 11:57:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:09.398 11:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:09.398 11:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:19:09.398 11:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:19:09.398 11:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:09.398 11:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:09.398 11:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:09.398 11:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:19:09.398 11:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:09.398 11:57:35 blockdev_raid5f.bdev_fio -- 
common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:09.398 11:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:09.398 11:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:19:09.398 11:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:19:09.398 11:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:19:09.398 11:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:19:09.398 11:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:19:09.398 11:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:19:09.398 11:57:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:09.398 11:57:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:09.399 11:57:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:09.399 11:57:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:09.399 11:57:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:09.399 11:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:19:09.399 11:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:09.399 
11:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:09.399 ************************************ 00:19:09.399 START TEST bdev_fio_rw_verify 00:19:09.399 ************************************ 00:19:09.399 11:57:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:09.399 11:57:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:09.399 11:57:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:09.399 11:57:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:09.399 11:57:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:09.399 11:57:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:09.399 11:57:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:19:09.399 11:57:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:09.399 11:57:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:09.399 11:57:35 
blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:09.399 11:57:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:19:09.399 11:57:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:09.399 11:57:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:09.399 11:57:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:09.399 11:57:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:19:09.399 11:57:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:09.658 11:57:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:09.659 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:09.659 fio-3.35 00:19:09.659 Starting 1 thread 00:19:21.877 00:19:21.877 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90442: Wed Nov 27 11:57:46 2024 00:19:21.877 read: IOPS=11.2k, BW=43.6MiB/s (45.7MB/s)(436MiB/10001msec) 00:19:21.877 slat (nsec): min=18605, max=58164, avg=21861.39, stdev=2359.59 00:19:21.877 clat (usec): min=9, max=336, avg=142.99, stdev=52.48 00:19:21.877 lat (usec): min=30, max=379, avg=164.85, stdev=52.95 00:19:21.877 clat percentiles (usec): 00:19:21.877 | 50.000th=[ 147], 99.000th=[ 253], 99.900th=[ 277], 
99.990th=[ 310], 00:19:21.877 | 99.999th=[ 334] 00:19:21.877 write: IOPS=11.7k, BW=45.5MiB/s (47.7MB/s)(450MiB/9883msec); 0 zone resets 00:19:21.877 slat (usec): min=8, max=331, avg=18.12, stdev= 3.79 00:19:21.877 clat (usec): min=61, max=1639, avg=327.64, stdev=48.28 00:19:21.877 lat (usec): min=85, max=1971, avg=345.76, stdev=49.62 00:19:21.877 clat percentiles (usec): 00:19:21.877 | 50.000th=[ 330], 99.000th=[ 433], 99.900th=[ 603], 99.990th=[ 1123], 00:19:21.877 | 99.999th=[ 1549] 00:19:21.877 bw ( KiB/s): min=42600, max=50320, per=99.17%, avg=46226.53, stdev=2198.13, samples=19 00:19:21.877 iops : min=10650, max=12580, avg=11556.63, stdev=549.53, samples=19 00:19:21.877 lat (usec) : 10=0.01%, 20=0.01%, 50=0.01%, 100=12.02%, 250=39.28% 00:19:21.877 lat (usec) : 500=48.61%, 750=0.06%, 1000=0.02% 00:19:21.877 lat (msec) : 2=0.01% 00:19:21.877 cpu : usr=99.05%, sys=0.35%, ctx=20, majf=0, minf=9245 00:19:21.877 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:21.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.878 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.878 issued rwts: total=111628,115164,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.878 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:21.878 00:19:21.878 Run status group 0 (all jobs): 00:19:21.878 READ: bw=43.6MiB/s (45.7MB/s), 43.6MiB/s-43.6MiB/s (45.7MB/s-45.7MB/s), io=436MiB (457MB), run=10001-10001msec 00:19:21.878 WRITE: bw=45.5MiB/s (47.7MB/s), 45.5MiB/s-45.5MiB/s (47.7MB/s-47.7MB/s), io=450MiB (472MB), run=9883-9883msec 00:19:22.447 ----------------------------------------------------- 00:19:22.447 Suppressions used: 00:19:22.447 count bytes template 00:19:22.447 1 7 /usr/src/fio/parse.c 00:19:22.447 72 6912 /usr/src/fio/iolog.c 00:19:22.447 1 8 libtcmalloc_minimal.so 00:19:22.447 1 904 libcrypto.so 00:19:22.447 ----------------------------------------------------- 00:19:22.447 
00:19:22.447 00:19:22.447 real 0m12.851s 00:19:22.447 user 0m13.021s 00:19:22.447 sys 0m0.626s 00:19:22.447 ************************************ 00:19:22.447 END TEST bdev_fio_rw_verify 00:19:22.447 ************************************ 00:19:22.447 11:57:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:22.447 11:57:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:22.447 11:57:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:22.447 11:57:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:22.447 11:57:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:22.447 11:57:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:22.447 11:57:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:19:22.447 11:57:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:19:22.447 11:57:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:22.447 11:57:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:22.447 11:57:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:22.447 11:57:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:19:22.447 11:57:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:22.447 11:57:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:22.447 11:57:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:22.447 
11:57:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:19:22.447 11:57:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:19:22.447 11:57:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:19:22.447 11:57:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "73bf4d53-0f56-427d-82b5-53764ae91a34"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "73bf4d53-0f56-427d-82b5-53764ae91a34",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "73bf4d53-0f56-427d-82b5-53764ae91a34",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "97bc1bd2-0bac-4d98-a4d9-5a03c0df5699",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "25e9d3e3-2209-4890-90b3-a68f53ba0ce6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "c96b488c-ecce-4db5-bb7e-267b848cb3b3",' ' "is_configured": true,' ' 
"data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:22.447 11:57:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:22.447 11:57:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:22.447 11:57:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:22.447 11:57:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:22.447 /home/vagrant/spdk_repo/spdk 00:19:22.447 11:57:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:22.447 11:57:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:19:22.447 00:19:22.447 real 0m13.123s 00:19:22.447 user 0m13.134s 00:19:22.447 sys 0m0.755s 00:19:22.447 11:57:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:22.447 ************************************ 00:19:22.447 END TEST bdev_fio 00:19:22.447 ************************************ 00:19:22.447 11:57:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:22.447 11:57:48 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:22.448 11:57:48 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:22.448 11:57:48 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:22.448 11:57:48 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:22.448 11:57:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:22.448 ************************************ 00:19:22.448 START TEST bdev_verify 00:19:22.448 ************************************ 00:19:22.448 11:57:48 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:22.706 [2024-11-27 11:57:48.861618] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:19:22.706 [2024-11-27 11:57:48.861796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90600 ] 00:19:22.706 [2024-11-27 11:57:49.042703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:22.980 [2024-11-27 11:57:49.158071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.980 [2024-11-27 11:57:49.158110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.550 Running I/O for 5 seconds... 00:19:25.421 15703.00 IOPS, 61.34 MiB/s [2024-11-27T11:57:52.745Z] 14010.00 IOPS, 54.73 MiB/s [2024-11-27T11:57:54.126Z] 12541.67 IOPS, 48.99 MiB/s [2024-11-27T11:57:55.065Z] 11779.25 IOPS, 46.01 MiB/s [2024-11-27T11:57:55.065Z] 11344.40 IOPS, 44.31 MiB/s 00:19:28.680 Latency(us) 00:19:28.680 [2024-11-27T11:57:55.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.680 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:28.680 Verification LBA range: start 0x0 length 0x2000 00:19:28.680 raid5f : 5.02 5987.23 23.39 0.00 0.00 32173.66 377.40 35944.64 00:19:28.680 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:28.680 Verification LBA range: start 0x2000 length 0x2000 00:19:28.680 raid5f : 5.03 5354.57 20.92 0.00 0.00 36029.43 262.93 34342.01 00:19:28.680 [2024-11-27T11:57:55.065Z] =================================================================================================================== 00:19:28.680 [2024-11-27T11:57:55.065Z] Total : 11341.80 
44.30 0.00 0.00 33994.49 262.93 35944.64 00:19:30.063 ************************************ 00:19:30.063 END TEST bdev_verify 00:19:30.063 ************************************ 00:19:30.063 00:19:30.063 real 0m7.566s 00:19:30.063 user 0m13.953s 00:19:30.063 sys 0m0.302s 00:19:30.063 11:57:56 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:30.063 11:57:56 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:30.063 11:57:56 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:30.063 11:57:56 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:30.063 11:57:56 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:30.063 11:57:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:30.063 ************************************ 00:19:30.063 START TEST bdev_verify_big_io 00:19:30.063 ************************************ 00:19:30.063 11:57:56 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:30.322 [2024-11-27 11:57:56.487054] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 
00:19:30.322 [2024-11-27 11:57:56.487238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90704 ] 00:19:30.322 [2024-11-27 11:57:56.662424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:30.582 [2024-11-27 11:57:56.802800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.582 [2024-11-27 11:57:56.802866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.153 Running I/O for 5 seconds... 00:19:33.500 633.00 IOPS, 39.56 MiB/s [2024-11-27T11:58:00.824Z] 634.00 IOPS, 39.62 MiB/s [2024-11-27T11:58:01.763Z] 676.00 IOPS, 42.25 MiB/s [2024-11-27T11:58:02.699Z] 666.25 IOPS, 41.64 MiB/s [2024-11-27T11:58:02.959Z] 710.00 IOPS, 44.38 MiB/s 00:19:36.574 Latency(us) 00:19:36.574 [2024-11-27T11:58:02.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.574 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:36.574 Verification LBA range: start 0x0 length 0x200 00:19:36.574 raid5f : 5.34 403.60 25.22 0.00 0.00 7971611.63 384.56 347999.02 00:19:36.574 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:36.574 Verification LBA range: start 0x200 length 0x200 00:19:36.574 raid5f : 5.44 303.33 18.96 0.00 0.00 10459247.91 241.47 446904.01 00:19:36.574 [2024-11-27T11:58:02.959Z] =================================================================================================================== 00:19:36.574 [2024-11-27T11:58:02.959Z] Total : 706.93 44.18 0.00 0.00 9049413.06 241.47 446904.01 00:19:38.481 00:19:38.481 real 0m8.027s 00:19:38.481 user 0m14.799s 00:19:38.481 sys 0m0.368s 00:19:38.481 11:58:04 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:38.481 11:58:04 
blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:38.481 ************************************ 00:19:38.481 END TEST bdev_verify_big_io 00:19:38.481 ************************************ 00:19:38.481 11:58:04 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:38.481 11:58:04 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:38.481 11:58:04 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:38.481 11:58:04 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:38.481 ************************************ 00:19:38.481 START TEST bdev_write_zeroes 00:19:38.481 ************************************ 00:19:38.481 11:58:04 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:38.481 [2024-11-27 11:58:04.580992] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:19:38.481 [2024-11-27 11:58:04.581104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90808 ] 00:19:38.481 [2024-11-27 11:58:04.757435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.740 [2024-11-27 11:58:04.894131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.324 Running I/O for 1 seconds... 
00:19:40.261 27471.00 IOPS, 107.31 MiB/s 00:19:40.261 Latency(us) 00:19:40.261 [2024-11-27T11:58:06.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.261 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:40.261 raid5f : 1.01 27440.83 107.19 0.00 0.00 4650.32 1509.62 6324.65 00:19:40.261 [2024-11-27T11:58:06.646Z] =================================================================================================================== 00:19:40.261 [2024-11-27T11:58:06.646Z] Total : 27440.83 107.19 0.00 0.00 4650.32 1509.62 6324.65 00:19:42.193 00:19:42.193 real 0m3.577s 00:19:42.193 user 0m3.087s 00:19:42.193 sys 0m0.358s 00:19:42.193 11:58:08 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:42.193 11:58:08 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:42.193 ************************************ 00:19:42.193 END TEST bdev_write_zeroes 00:19:42.193 ************************************ 00:19:42.193 11:58:08 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:42.193 11:58:08 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:42.193 11:58:08 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:42.193 11:58:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:42.193 ************************************ 00:19:42.193 START TEST bdev_json_nonenclosed 00:19:42.193 ************************************ 00:19:42.193 11:58:08 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:42.193 [2024-11-27 
11:58:08.231582] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:19:42.193 [2024-11-27 11:58:08.231884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90861 ] 00:19:42.193 [2024-11-27 11:58:08.401236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.193 [2024-11-27 11:58:08.540744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.193 [2024-11-27 11:58:08.540880] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:42.193 [2024-11-27 11:58:08.540915] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:42.193 [2024-11-27 11:58:08.540928] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:42.453 00:19:42.453 real 0m0.681s 00:19:42.453 user 0m0.422s 00:19:42.453 sys 0m0.153s 00:19:42.453 ************************************ 00:19:42.453 END TEST bdev_json_nonenclosed 00:19:42.453 ************************************ 00:19:42.453 11:58:08 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:42.453 11:58:08 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:42.714 11:58:08 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:42.714 11:58:08 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:42.714 11:58:08 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:42.714 11:58:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:42.714 
************************************ 00:19:42.714 START TEST bdev_json_nonarray 00:19:42.714 ************************************ 00:19:42.714 11:58:08 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:42.714 [2024-11-27 11:58:08.985000] Starting SPDK v25.01-pre git sha1 24f0cb4c3 / DPDK 24.03.0 initialization... 00:19:42.714 [2024-11-27 11:58:08.985193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90892 ] 00:19:42.974 [2024-11-27 11:58:09.167266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.974 [2024-11-27 11:58:09.305741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.975 [2024-11-27 11:58:09.305992] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:19:42.975 [2024-11-27 11:58:09.306062] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:42.975 [2024-11-27 11:58:09.306158] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:43.235 00:19:43.235 real 0m0.695s 00:19:43.235 user 0m0.437s 00:19:43.235 sys 0m0.152s 00:19:43.235 ************************************ 00:19:43.235 END TEST bdev_json_nonarray 00:19:43.235 ************************************ 00:19:43.235 11:58:09 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:43.235 11:58:09 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:43.495 11:58:09 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:19:43.495 11:58:09 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:19:43.495 11:58:09 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:19:43.495 11:58:09 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:19:43.495 11:58:09 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:19:43.495 11:58:09 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:43.495 11:58:09 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:43.495 11:58:09 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:19:43.495 11:58:09 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:19:43.495 11:58:09 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:19:43.495 11:58:09 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:19:43.495 00:19:43.495 real 0m49.419s 00:19:43.495 user 1m6.715s 00:19:43.495 sys 0m5.078s 00:19:43.495 ************************************ 00:19:43.495 11:58:09 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:43.495 11:58:09 blockdev_raid5f -- 
common/autotest_common.sh@10 -- # set +x 00:19:43.495 END TEST blockdev_raid5f 00:19:43.495 ************************************ 00:19:43.495 11:58:09 -- spdk/autotest.sh@194 -- # uname -s 00:19:43.495 11:58:09 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:19:43.495 11:58:09 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:43.495 11:58:09 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:43.495 11:58:09 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:19:43.495 11:58:09 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:43.495 11:58:09 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:43.495 11:58:09 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:43.495 11:58:09 -- common/autotest_common.sh@10 -- # set +x 00:19:43.495 11:58:09 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:43.495 11:58:09 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:19:43.495 11:58:09 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:19:43.495 11:58:09 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:43.495 11:58:09 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:43.495 11:58:09 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:43.495 11:58:09 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:43.495 11:58:09 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:43.495 11:58:09 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:43.495 11:58:09 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:43.495 11:58:09 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:43.495 11:58:09 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:19:43.495 11:58:09 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:19:43.495 11:58:09 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:19:43.495 11:58:09 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:19:43.495 11:58:09 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:19:43.495 11:58:09 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:19:43.495 11:58:09 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:19:43.495 11:58:09 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:19:43.495 11:58:09 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:19:43.495 11:58:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:43.495 11:58:09 -- common/autotest_common.sh@10 -- # set +x 00:19:43.495 11:58:09 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:19:43.495 11:58:09 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:19:43.495 11:58:09 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:19:43.495 11:58:09 -- common/autotest_common.sh@10 -- # set +x 00:19:46.036 INFO: APP EXITING 00:19:46.036 INFO: killing all VMs 00:19:46.037 INFO: killing vhost app 00:19:46.037 INFO: EXIT DONE 00:19:46.037 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:46.037 Waiting for block devices as requested 00:19:46.297 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:46.297 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:47.236 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:47.236 Cleaning 00:19:47.236 Removing: /var/run/dpdk/spdk0/config 00:19:47.236 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:47.236 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:47.236 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:47.236 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:47.236 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:47.236 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:47.236 Removing: /dev/shm/spdk_tgt_trace.pid56905 00:19:47.236 Removing: /var/run/dpdk/spdk0 00:19:47.236 Removing: /var/run/dpdk/spdk_pid56664 00:19:47.236 Removing: /var/run/dpdk/spdk_pid56905 00:19:47.236 Removing: /var/run/dpdk/spdk_pid57134 00:19:47.236 Removing: /var/run/dpdk/spdk_pid57239 00:19:47.236 Removing: /var/run/dpdk/spdk_pid57289 00:19:47.236 Removing: /var/run/dpdk/spdk_pid57423 00:19:47.236 Removing: /var/run/dpdk/spdk_pid57441 
00:19:47.236 Removing: /var/run/dpdk/spdk_pid57653 00:19:47.236 Removing: /var/run/dpdk/spdk_pid57765 00:19:47.236 Removing: /var/run/dpdk/spdk_pid57877 00:19:47.236 Removing: /var/run/dpdk/spdk_pid58004 00:19:47.236 Removing: /var/run/dpdk/spdk_pid58113 00:19:47.236 Removing: /var/run/dpdk/spdk_pid58152 00:19:47.236 Removing: /var/run/dpdk/spdk_pid58189 00:19:47.236 Removing: /var/run/dpdk/spdk_pid58265 00:19:47.236 Removing: /var/run/dpdk/spdk_pid58376 00:19:47.236 Removing: /var/run/dpdk/spdk_pid58818 00:19:47.236 Removing: /var/run/dpdk/spdk_pid58893 00:19:47.236 Removing: /var/run/dpdk/spdk_pid58967 00:19:47.236 Removing: /var/run/dpdk/spdk_pid58989 00:19:47.236 Removing: /var/run/dpdk/spdk_pid59139 00:19:47.236 Removing: /var/run/dpdk/spdk_pid59155 00:19:47.236 Removing: /var/run/dpdk/spdk_pid59301 00:19:47.236 Removing: /var/run/dpdk/spdk_pid59322 00:19:47.236 Removing: /var/run/dpdk/spdk_pid59386 00:19:47.236 Removing: /var/run/dpdk/spdk_pid59410 00:19:47.236 Removing: /var/run/dpdk/spdk_pid59474 00:19:47.236 Removing: /var/run/dpdk/spdk_pid59492 00:19:47.236 Removing: /var/run/dpdk/spdk_pid59700 00:19:47.236 Removing: /var/run/dpdk/spdk_pid59731 00:19:47.237 Removing: /var/run/dpdk/spdk_pid59820 00:19:47.237 Removing: /var/run/dpdk/spdk_pid61162 00:19:47.237 Removing: /var/run/dpdk/spdk_pid61374 00:19:47.237 Removing: /var/run/dpdk/spdk_pid61514 00:19:47.237 Removing: /var/run/dpdk/spdk_pid62157 00:19:47.237 Removing: /var/run/dpdk/spdk_pid62369 00:19:47.237 Removing: /var/run/dpdk/spdk_pid62514 00:19:47.237 Removing: /var/run/dpdk/spdk_pid63163 00:19:47.237 Removing: /var/run/dpdk/spdk_pid63488 00:19:47.237 Removing: /var/run/dpdk/spdk_pid63628 00:19:47.237 Removing: /var/run/dpdk/spdk_pid65024 00:19:47.237 Removing: /var/run/dpdk/spdk_pid65284 00:19:47.237 Removing: /var/run/dpdk/spdk_pid65428 00:19:47.237 Removing: /var/run/dpdk/spdk_pid66830 00:19:47.237 Removing: /var/run/dpdk/spdk_pid67089 00:19:47.237 Removing: /var/run/dpdk/spdk_pid67234 
00:19:47.237 Removing: /var/run/dpdk/spdk_pid68637 00:19:47.237 Removing: /var/run/dpdk/spdk_pid69083 00:19:47.497 Removing: /var/run/dpdk/spdk_pid69235 00:19:47.497 Removing: /var/run/dpdk/spdk_pid70722 00:19:47.497 Removing: /var/run/dpdk/spdk_pid70986 00:19:47.497 Removing: /var/run/dpdk/spdk_pid71132 00:19:47.497 Removing: /var/run/dpdk/spdk_pid72629 00:19:47.497 Removing: /var/run/dpdk/spdk_pid72892 00:19:47.497 Removing: /var/run/dpdk/spdk_pid73038 00:19:47.497 Removing: /var/run/dpdk/spdk_pid74529 00:19:47.497 Removing: /var/run/dpdk/spdk_pid75016 00:19:47.497 Removing: /var/run/dpdk/spdk_pid75162 00:19:47.497 Removing: /var/run/dpdk/spdk_pid75300 00:19:47.497 Removing: /var/run/dpdk/spdk_pid75724 00:19:47.497 Removing: /var/run/dpdk/spdk_pid76453 00:19:47.497 Removing: /var/run/dpdk/spdk_pid76849 00:19:47.497 Removing: /var/run/dpdk/spdk_pid77557 00:19:47.497 Removing: /var/run/dpdk/spdk_pid77999 00:19:47.497 Removing: /var/run/dpdk/spdk_pid78764 00:19:47.497 Removing: /var/run/dpdk/spdk_pid79173 00:19:47.497 Removing: /var/run/dpdk/spdk_pid81171 00:19:47.497 Removing: /var/run/dpdk/spdk_pid81616 00:19:47.497 Removing: /var/run/dpdk/spdk_pid82063 00:19:47.497 Removing: /var/run/dpdk/spdk_pid84191 00:19:47.497 Removing: /var/run/dpdk/spdk_pid84672 00:19:47.497 Removing: /var/run/dpdk/spdk_pid85194 00:19:47.497 Removing: /var/run/dpdk/spdk_pid86251 00:19:47.497 Removing: /var/run/dpdk/spdk_pid86574 00:19:47.497 Removing: /var/run/dpdk/spdk_pid87512 00:19:47.497 Removing: /var/run/dpdk/spdk_pid87840 00:19:47.497 Removing: /var/run/dpdk/spdk_pid88784 00:19:47.497 Removing: /var/run/dpdk/spdk_pid89107 00:19:47.497 Removing: /var/run/dpdk/spdk_pid89789 00:19:47.497 Removing: /var/run/dpdk/spdk_pid90069 00:19:47.497 Removing: /var/run/dpdk/spdk_pid90136 00:19:47.497 Removing: /var/run/dpdk/spdk_pid90178 00:19:47.497 Removing: /var/run/dpdk/spdk_pid90427 00:19:47.497 Removing: /var/run/dpdk/spdk_pid90600 00:19:47.497 Removing: /var/run/dpdk/spdk_pid90704 
00:19:47.497 Removing: /var/run/dpdk/spdk_pid90808 00:19:47.497 Removing: /var/run/dpdk/spdk_pid90861 00:19:47.497 Removing: /var/run/dpdk/spdk_pid90892 00:19:47.497 Clean 00:19:47.497 11:58:13 -- common/autotest_common.sh@1453 -- # return 0 00:19:47.497 11:58:13 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:19:47.497 11:58:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:47.497 11:58:13 -- common/autotest_common.sh@10 -- # set +x 00:19:47.756 11:58:13 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:19:47.756 11:58:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:47.756 11:58:13 -- common/autotest_common.sh@10 -- # set +x 00:19:47.756 11:58:13 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:47.756 11:58:13 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:47.756 11:58:13 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:47.756 11:58:13 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:19:47.756 11:58:13 -- spdk/autotest.sh@398 -- # hostname 00:19:47.756 11:58:13 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:48.016 geninfo: WARNING: invalid characters removed from testname! 
00:20:10.068 11:58:35 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:12.609 11:58:38 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:14.514 11:58:40 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:16.437 11:58:42 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:18.371 11:58:44 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:20.277 11:58:46 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:22.814 11:58:48 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:22.814 11:58:48 -- spdk/autorun.sh@1 -- $ timing_finish 00:20:22.814 11:58:48 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:20:22.814 11:58:48 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:22.814 11:58:48 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:20:22.814 11:58:48 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:22.814 + [[ -n 5436 ]] 00:20:22.814 + sudo kill 5436 00:20:22.823 [Pipeline] } 00:20:22.839 [Pipeline] // timeout 00:20:22.844 [Pipeline] } 00:20:22.859 [Pipeline] // stage 00:20:22.864 [Pipeline] } 00:20:22.879 [Pipeline] // catchError 00:20:22.888 [Pipeline] stage 00:20:22.891 [Pipeline] { (Stop VM) 00:20:22.903 [Pipeline] sh 00:20:23.186 + vagrant halt 00:20:25.721 ==> default: Halting domain... 00:20:32.309 [Pipeline] sh 00:20:32.595 + vagrant destroy -f 00:20:35.132 ==> default: Removing domain... 
00:20:35.144 [Pipeline] sh 00:20:35.431 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:20:35.440 [Pipeline] } 00:20:35.458 [Pipeline] // stage 00:20:35.463 [Pipeline] } 00:20:35.477 [Pipeline] // dir 00:20:35.482 [Pipeline] } 00:20:35.496 [Pipeline] // wrap 00:20:35.501 [Pipeline] } 00:20:35.513 [Pipeline] // catchError 00:20:35.522 [Pipeline] stage 00:20:35.524 [Pipeline] { (Epilogue) 00:20:35.537 [Pipeline] sh 00:20:35.820 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:41.102 [Pipeline] catchError 00:20:41.104 [Pipeline] { 00:20:41.117 [Pipeline] sh 00:20:41.426 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:41.426 Artifacts sizes are good 00:20:41.434 [Pipeline] } 00:20:41.449 [Pipeline] // catchError 00:20:41.460 [Pipeline] archiveArtifacts 00:20:41.467 Archiving artifacts 00:20:41.555 [Pipeline] cleanWs 00:20:41.567 [WS-CLEANUP] Deleting project workspace... 00:20:41.567 [WS-CLEANUP] Deferred wipeout is used... 00:20:41.572 [WS-CLEANUP] done 00:20:41.574 [Pipeline] } 00:20:41.588 [Pipeline] // stage 00:20:41.593 [Pipeline] } 00:20:41.606 [Pipeline] // node 00:20:41.611 [Pipeline] End of Pipeline 00:20:41.650 Finished: SUCCESS